Commit Graph

183 Commits

Author SHA1 Message Date
Ashwin Naren
379b7ffab8 Refactor vruntime to v_runtime 2025-12-29 12:39:58 -08:00
Matthew Leach
992fe21844 Merge pull request #113 from hexagonal-sun/make-task-list-arc-task
process: `TASK_LIST`: point to `Task` struct
2025-12-29 20:38:56 +00:00
Ashwin Naren
c816054d36 Support cwd symlink in procfs (#101)
Support cwd symlink in procfs
2025-12-29 20:35:31 +00:00
Matthew Leach
9e80a6ae8a process: TASK_LIST: point to Task struct
Make the global `TASK_LIST` struct be a collection of `Task`s, rather
than `task.state` struct members. This allows other cores to access
any shared task state easily.
2025-12-29 20:28:00 +00:00
Ashwin Naren
74bc44a317 update dependencies 2025-12-29 12:03:32 -08:00
Ashwin Naren
3284b04197 fix build script 2025-12-29 11:54:43 -08:00
Ashwin Naren
9fc6ea6662 fix dockerfile 2025-12-29 11:54:43 -08:00
Ashwin Naren
ddd5b0d461 test usertests on CI 2025-12-29 11:54:43 -08:00
Ashwin Naren
937adb12d0 dockerfile 2025-12-29 11:54:43 -08:00
Matthew Leach
02586457f1 Merge pull request #91 from arihant2math/multicore-sched
Schedule onto multiple cores in round-robin fashion
2025-12-29 16:20:53 +00:00
Matthew Leach
9f0bf1f689 process: mod: last_cpu: use CpuId
Use the new `CpuId` type for the `last_cpu` field in `Task`.
2025-12-29 16:19:49 +00:00
Matthew Leach
bbde9f04aa sched: remove unused function sched_yield 2025-12-28 23:51:51 -08:00
Matthew Leach
108b580e83 sched: fix formatting
Fix formatting in the sched module to keep CI happy.
2025-12-28 23:51:51 -08:00
Matthew Leach
e2e7cdaeec timer: use a per-cpu wakeup queue
Currently, a global wakeup queue is used for all CPUs on the system.
This leads to inefficient behavior regarding preemption. When the
scheduler requests a preemption event, it is inserted into a global list
alongside events from all other CPUs.

When processing IRQs, there is no guarantee which CPU will handle the
timer interrupt. If the current CPU processes a preemption event
intended for a different CPU, it must signal the target CPU via an IPI.
This causes a severe bottleneck, as one CPU may end up distributing
preemption events for the entire system.

Fix this by implementing a per-cpu wakeup queue. Preemption events are
now strictly scheduled for the current CPU, ensuring they are handled
locally by the core that scheduled them. This significantly simplifies
the preemption logic and eliminates the need for IPIs to signal
preemption events.
2025-12-28 23:51:51 -08:00
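The per-CPU wakeup queue described in this commit can be sketched roughly as follows. This is an illustrative model only, not the repository's code: the `Wakeup` and `PerCpuWakeupQueues` names are hypothetical, and `std` locks stand in for kernel spinlocks.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;
use std::sync::Mutex;

/// A wakeup event: fires at `deadline` (in ticks) for task `task_id`.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Wakeup {
    deadline: u64,
    task_id: usize,
}

/// One min-heap of pending wakeups per CPU, so each core only ever
/// inserts and services its own events -- no cross-CPU IPIs needed.
struct PerCpuWakeupQueues {
    queues: Vec<Mutex<BinaryHeap<Reverse<(u64, usize)>>>>,
}

impl PerCpuWakeupQueues {
    fn new(num_cpus: usize) -> Self {
        Self {
            queues: (0..num_cpus).map(|_| Mutex::new(BinaryHeap::new())).collect(),
        }
    }

    /// Schedule a wakeup strictly on the given CPU's local queue.
    fn schedule(&self, cpu: usize, ev: Wakeup) {
        self.queues[cpu].lock().unwrap().push(Reverse((ev.deadline, ev.task_id)));
    }

    /// Called from the timer ISR on `cpu`: pop every event whose
    /// deadline has passed. Only this CPU's events are ever returned.
    fn expire(&self, cpu: usize, now: u64) -> Vec<Wakeup> {
        let mut q = self.queues[cpu].lock().unwrap();
        let mut due = Vec::new();
        while let Some(&Reverse((deadline, task_id))) = q.peek() {
            if deadline > now {
                break;
            }
            q.pop();
            due.push(Wakeup { deadline, task_id });
        }
        due
    }
}
```

Because a preemption event is inserted into, and expired from, the same core's queue, no CPU ever needs to forward another core's events.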
Ashwin Naren
4fedf19e51 Optimize current-cpu case in task insertion 2025-12-28 23:51:51 -08:00
Ashwin Naren
0f1a486abb Force preemption of idle task 2025-12-28 23:51:51 -08:00
Ashwin Naren
018c1d9450 fix preemption not being scheduled 2025-12-28 23:51:51 -08:00
Ashwin Naren
b818047a8a fix deadlock in interrupt handler 2025-12-28 23:51:51 -08:00
Ashwin Naren
b0d214d3de update debug statements 2025-12-28 23:51:51 -08:00
Ashwin Naren
45135317df fix deadline guard to minimize overhead 2025-12-28 23:51:51 -08:00
Ashwin Naren
0227e0dc9a Preempt IPI 2025-12-28 23:51:51 -08:00
Matthew Leach
116a1adbd0 clone: add all tasks to process task list
This prevents a bug where `sys_exit` calls `exit_group` for the thread's
process, even when there are still active threads.
2025-12-28 23:51:51 -08:00
Matthew Leach
0f566f37e7 sched: remove incorrect assertion check
When switching tasks, we may well be switching away from a task which is
going to `Sleep`. Therefore the check

```rust
 debug_assert_eq!(*prev_task.state.lock_save_irq(), TaskState::Runnable);
```

is incorrect.
2025-12-28 23:51:51 -08:00
Ashwin Naren
ca8555283b use messengers
# Conflicts:
#	libkernel/src/sync/per_cpu.rs
2025-12-28 23:51:51 -08:00
Ashwin Naren
0a3d0851ee include last run CPU 2025-12-28 23:51:51 -08:00
Ashwin Naren
39d0bba0c6 EEVDF improvements
Use EEVDF concepts such as the virtual deadline correctly: actually
calculate the necessary deadline and use it to drive scheduling.

Also dynamically preempt based on the deadline.
2025-12-28 23:51:51 -08:00
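The virtual-deadline idea this commit refers to can be illustrated with the standard EEVDF formulation, where a task's deadline is its eligible time plus its request scaled down by weight. This is a generic sketch, not this repository's implementation; `SchedEntity` and its fields are hypothetical names.

```rust
/// Minimal EEVDF-style bookkeeping (illustrative only).
#[derive(Debug)]
struct SchedEntity {
    v_eligible: u64, // virtual time at which the task becomes eligible
    slice: u64,      // requested time slice, in virtual-time units
    weight: u64,     // scheduling weight (higher = larger CPU share)
}

impl SchedEntity {
    /// EEVDF virtual deadline: eligible time plus the request scaled
    /// down by weight. Tasks are picked by earliest virtual deadline.
    fn virtual_deadline(&self) -> u64 {
        self.v_eligible + self.slice / self.weight
    }
}

/// Pick the entity with the earliest virtual deadline; the running
/// task is preempted when another entity's deadline is sooner.
fn pick_next(entities: &[SchedEntity]) -> Option<usize> {
    entities
        .iter()
        .enumerate()
        .min_by_key(|(_, e)| e.virtual_deadline())
        .map(|(i, _)| i)
}
```

A heavier-weighted task's request shrinks more in virtual time, giving it an earlier deadline and therefore more frequent selection.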
Ashwin Naren
5e553096dd try fix scheduling on non-main cores 2025-12-28 23:49:33 -08:00
Ashwin Naren
b75f29804f schedule in round-robin fashion 2025-12-28 23:49:33 -08:00
Matthew Leach
7aecc6fecd interrupts: refactor interrupt manager and reduce locking
There is no need to store a separate inner struct within the interrupt
manager; refactor that away.

Also, reduce the amount of locking when servicing an interrupt.
Currently, we keep the whole interrupt manager locked while servicing an
interrupt; it should be kept unlocked while the ISR is called.
2025-12-28 23:48:43 -08:00
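The locking pattern this commit describes, taking the lock only long enough to fetch the handler, then calling the ISR unlocked, can be sketched like this. The types are hypothetical stand-ins (a `std::sync::Mutex` in place of a kernel spinlock), not the repository's actual interrupt manager.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

type Isr = Arc<dyn Fn() + Send + Sync>;

/// Interrupt manager: the handler table itself is the only locked
/// state; there is no separate locked "inner" struct.
struct InterruptManager {
    handlers: Mutex<HashMap<u32, Isr>>,
}

impl InterruptManager {
    fn new() -> Self {
        Self { handlers: Mutex::new(HashMap::new()) }
    }

    fn register(&self, irq: u32, isr: Isr) {
        self.handlers.lock().unwrap().insert(irq, isr);
    }

    /// Service an interrupt: hold the lock only long enough to clone
    /// the handler's `Arc`, then invoke the ISR with the lock released,
    /// so the ISR may itself register handlers without deadlocking.
    fn service(&self, irq: u32) -> bool {
        let isr = {
            let table = self.handlers.lock().unwrap();
            table.get(&irq).cloned()
        }; // lock dropped here, before the ISR runs
        match isr {
            Some(isr) => {
                isr();
                true
            }
            None => false,
        }
    }
}
```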
Matthew Leach
3ffb3b2a80 memory: fault: handle colliding page faults
In an SMP environment, two threads sharing an address space may trigger
a page fault on the same address simultaneously. Previously, the loser
of this race would receive an `AlreadyMapped` error from the page table
mapper, causing the kernel to treat a valid execution flow as an error.

This patch modifies `handle_demand_fault` to gracefully handle these
spurious faults by:

1. Accepting `AlreadyMapped` as a successful resolution. If another CPU
has already mapped the page while we were waiting for the lock
(or performing I/O), we consider the fault handled.

2. Fixing a memory leak in the race path. We now only `leak()` the
allocated `ClaimedPage` (surrendering ownership to the page tables) if
the mapping actually succeeds. If we lose the race, the `ClaimedPage` is
allowed to go out of scope, causing the `Drop` impl to return the unused
physical frame to the allocator.

3. Applying this logic to both the anonymous mapping path and the
deferred file-backed path.
2025-12-28 23:42:19 -08:00
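The ownership discipline in points 1 and 2 above can be sketched as follows. This is a toy model with hypothetical names (`ClaimedPage`, `handle_demand_fault`, a thread-local counter standing in for the frame allocator), not the kernel's real types.

```rust
use std::cell::RefCell;

#[derive(Debug, PartialEq)]
enum MapError {
    AlreadyMapped,
    OutOfMemory,
}

thread_local! {
    // Stand-in for the physical frame allocator's free-frame count.
    static FREE_FRAMES: RefCell<usize> = RefCell::new(8);
}

/// A freshly allocated physical frame. Dropping it returns the frame
/// to the allocator; `leak()` surrenders ownership to the page tables.
struct ClaimedPage;

impl ClaimedPage {
    fn alloc() -> Option<ClaimedPage> {
        FREE_FRAMES.with(|f| {
            let mut n = f.borrow_mut();
            if *n == 0 {
                return None;
            }
            *n -= 1;
            Some(ClaimedPage)
        })
    }

    fn leak(self) {
        std::mem::forget(self); // frame is now owned by the page tables
    }
}

impl Drop for ClaimedPage {
    fn drop(&mut self) {
        FREE_FRAMES.with(|f| *f.borrow_mut() += 1);
    }
}

/// Handle a demand fault; `map` stands in for the page-table mapper.
fn handle_demand_fault(map: impl Fn() -> Result<(), MapError>) -> Result<(), MapError> {
    let page = ClaimedPage::alloc().ok_or(MapError::OutOfMemory)?;
    match map() {
        // We won the race: leak() so the page tables own the frame.
        Ok(()) => {
            page.leak();
            Ok(())
        }
        // Another CPU mapped the address first: treat the fault as
        // resolved, and let `page` drop to return the unused frame.
        Err(MapError::AlreadyMapped) => Ok(()),
        Err(e) => Err(e),
    }
}
```

Leaking only on success is what closes the leak: on the losing side of the race, `Drop` hands the frame back automatically.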
Matthew Leach
1dd2b7b39d arm64: memory: heap: use irq-saving spinlock
The `LockedHeap` type provided by `linked_list` doesn't disable
interrupts when modifying the heap. This can cause a deadlock when an
allocation occurs during an ISR.

Fix this by wrapping the `Heap` in our interrupt-aware spinlock.
2025-12-28 23:39:27 -08:00
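The shape of an interrupt-aware lock like the one this commit wraps the heap in can be sketched as below. It is a simulation only: an `AtomicBool` stands in for the CPU's interrupt-enable state (a real arm64 kernel would touch DAIF), and the names are hypothetical.

```rust
use std::ops::{Deref, DerefMut};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Mutex, MutexGuard};

// Simulated CPU interrupt-enable flag.
static IRQS_ENABLED: AtomicBool = AtomicBool::new(true);

/// A lock whose guard keeps interrupts masked for the whole critical
/// section, so an ISR can never re-enter the heap mid-allocation.
struct IrqSaveLock<T> {
    inner: Mutex<T>,
}

struct IrqSaveGuard<'a, T> {
    guard: Option<MutexGuard<'a, T>>,
    were_enabled: bool,
}

impl<T> IrqSaveLock<T> {
    fn new(value: T) -> Self {
        Self { inner: Mutex::new(value) }
    }

    /// Save the current IRQ state, mask interrupts, then take the lock.
    fn lock_save_irq(&self) -> IrqSaveGuard<'_, T> {
        let were_enabled = IRQS_ENABLED.swap(false, Ordering::SeqCst);
        IrqSaveGuard { guard: Some(self.inner.lock().unwrap()), were_enabled }
    }
}

impl<T> Deref for IrqSaveGuard<'_, T> {
    type Target = T;
    fn deref(&self) -> &T { self.guard.as_ref().unwrap() }
}

impl<T> DerefMut for IrqSaveGuard<'_, T> {
    fn deref_mut(&mut self) -> &mut T { self.guard.as_mut().unwrap() }
}

impl<T> Drop for IrqSaveGuard<'_, T> {
    fn drop(&mut self) {
        // Release the lock first, then restore the saved IRQ state.
        self.guard.take();
        if self.were_enabled {
            IRQS_ENABLED.store(true, Ordering::SeqCst);
        }
    }
}
```

Saving and restoring the previous state (rather than unconditionally re-enabling) keeps the lock correct when taken with interrupts already masked, e.g. from within an ISR.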
Matthew Leach
9c8571c2f7 per_cpu: make reentrant-safe
Disable interrupts on per_cpu guards, ensuring that critical sections
are reentrant-safe.
2025-12-28 23:38:24 -08:00
Matthew Leach
cdb9a73297 futex: only access timeout on _WAIT ops
The `timeout` parameter is only used for `_WAIT` futex ops. For other
ops, the `timeout` parameter is permitted to hold an undefined value. The
current implementation would then try to `copy_from_user` using the
garbage pointer and fault, causing a missed wake-up and deadlocking the
calling process.

Fix this by only accessing the timeout parameter for `_WAIT` futex ops
where the parameter's value must be valid.
2025-12-28 23:38:03 -08:00
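The fix described above amounts to gating the user-memory access on the op. A minimal sketch, with hypothetical names and a toy `copy_from_user` that rejects misaligned or null pointers:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum FutexOp {
    Wait,
    Wake,
    Requeue,
}

#[derive(Clone, Copy, Debug, PartialEq)]
struct Timespec {
    sec: u64,
    nsec: u64,
}

/// Stand-in for copy_from_user: faults on a garbage pointer.
fn copy_from_user(ptr: usize) -> Result<Timespec, &'static str> {
    if ptr == 0 || ptr % 8 != 0 {
        return Err("EFAULT");
    }
    Ok(Timespec { sec: 1, nsec: 0 })
}

/// Only dereference `timeout_ptr` for _WAIT ops; for every other op
/// the register may hold an undefined value and must be ignored.
fn futex_timeout(op: FutexOp, timeout_ptr: usize) -> Result<Option<Timespec>, &'static str> {
    match op {
        FutexOp::Wait => {
            if timeout_ptr == 0 {
                Ok(None) // NULL timeout means "wait forever"
            } else {
                copy_from_user(timeout_ptr).map(Some)
            }
        }
        // Non-wait ops never touch the pointer.
        _ => Ok(None),
    }
}
```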
Matthew Leach
73431e3a81 Merge pull request #100 from some100/master
Implement sys_syncfs, sys_fsync, and sys_fdatasync
2025-12-28 21:23:55 +00:00
ootinnyoo
caf1d923c8 implement sys_syncfs, sys_fsync, and sys_fdatasync 2025-12-28 15:55:12 -05:00
ootinnyoo
d2723b716c implement sys_statx 2025-12-28 10:41:15 -08:00
Matthew Leach
8bf592d86f Merge pull request #97 from some100/master
Implement pread* and pwrite* syscalls
2025-12-27 06:43:29 +00:00
Matthew Leach
19a0eabfa4 Merge pull request #96 from arihant2math/add-lldbinit
Add .lldbinit file
2025-12-27 06:36:59 +00:00
ootinnyoo
bada97e048 implement pread* and pwrite* syscalls 2025-12-26 21:45:40 -05:00
Ashwin Naren
de7ae3662b add .lldbinit file 2025-12-26 18:12:49 -08:00
ootinnyoo
e31d1a05e8 move emptiness check to fs 2025-12-26 16:52:08 -08:00
ootinnyoo
aa29951c2d implement sys_renameat and sys_renameat2 2025-12-26 16:52:08 -08:00
ootinnyoo
ece2feaf21 implement proper AtFlags handling 2025-12-25 21:23:04 -08:00
ootinnyoo
2727f640d8 implement sys_utimensat 2025-12-25 10:33:16 -08:00
ootinnyoo
8030cf0d8f add readlink test in symlink test 2025-12-25 00:46:40 -08:00
ootinnyoo
70e81b39f4 add support for symlinks 2025-12-25 00:46:40 -08:00
Matthew Leach
69f50fef18 sched: fix futures race-condition
When a future returns `Poll::Pending`, there is a window where, if a
waker is called before the sched code sets the task's state to
`Sleeping`, the wake-up could be lost. We get around this by
introducing a new state, `Woken`.

A waker will set a `Running` task to this state. The sched code then
detects this and *does not* set the task's state to `Sleeping`; instead
it leaves the task running and attempts to re-schedule.
2025-12-24 18:44:41 -08:00
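The race-free transition described above can be modelled with atomic compare-and-swap. This is an illustrative sketch, not the repository's code; the state encoding and function names are hypothetical.

```rust
use std::sync::atomic::{AtomicU8, Ordering};

// Task states (illustrative numbering).
const RUNNING: u8 = 0;
const RUNNABLE: u8 = 1;
const SLEEPING: u8 = 2;
const WOKEN: u8 = 3;

/// Waker side: if the task is still `Running` (the future returned
/// `Poll::Pending` but the sched code hasn't slept it yet), mark it
/// `Woken` so the wake-up cannot be lost in the window.
fn wake(state: &AtomicU8) {
    if state
        .compare_exchange(RUNNING, WOKEN, Ordering::AcqRel, Ordering::Acquire)
        .is_err()
    {
        // Task was already Sleeping: move it straight back to Runnable.
        let _ = state.compare_exchange(SLEEPING, RUNNABLE, Ordering::AcqRel, Ordering::Acquire);
    }
}

/// Sched side, after `Poll::Pending`: transition to `Sleeping` only if
/// no waker fired in the window. Returns true if the task really
/// sleeps; false means a wake-up raced us and we should re-schedule.
fn after_pending(state: &AtomicU8) -> bool {
    match state.compare_exchange(RUNNING, SLEEPING, Ordering::AcqRel, Ordering::Acquire) {
        Ok(_) => true, // really going to sleep
        Err(_) => {
            // A waker set us to Woken: stay running and poll again.
            state.store(RUNNING, Ordering::Release);
            false
        }
    }
}
```

The compare-exchange is what closes the race: whichever side acts second observes the other's transition instead of overwriting it.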
Matthew Leach
bd21276368 sched: don't bother with pointless state update
The caller of `switch_to_task` should have already set the task's state
to `Runnable` for it to be considered in this time slice. Ensure that is
the case with a debug_assert.
2025-12-24 18:44:41 -08:00
Matthew Leach
2e8871840d sched: ensure task is running when re-running same task
When scheduling the same task to be run, ensure that the state is set to
`Running` as it will have been set to `Runnable` before entering this
function call.
2025-12-24 18:44:41 -08:00
Matthew Leach
6abdbbb6d5 sched: move last_run update into switch_to_task 2025-12-24 18:44:41 -08:00