PatchworkOS  da8a090 — A non-POSIX operating system.
sched.h
#pragma once

#include <kernel/defs.h>
#include <kernel/sched/wait.h>
#include <kernel/sync/lock.h>

#include <sys/list.h>
#include <sys/proc.h>

typedef struct process process_t;
typedef struct thread thread_t;

typedef struct sched sched_t;
/**
 * @brief The Earliest Eligible Virtual Deadline First (EEVDF) scheduler.
 * @defgroup kernel_sched The EEVDF Scheduler
 * @ingroup kernel
 *
 * The scheduler is responsible for allocating CPU time to threads. It does this in such a way as to create the
 * illusion that multiple threads are running simultaneously on a single CPU. Consider that a video is in reality just
 * a series of still images, rapidly displayed one after the other. The scheduler works in the same way, rapidly
 * switching between threads to give the illusion of simultaneous execution.
 *
 * PatchworkOS uses the Earliest Eligible Virtual Deadline First (EEVDF) algorithm for its scheduler, which is a
 * proportional share scheduling algorithm that aims to fairly distribute CPU time among threads based on their
 * weights. This is in contrast to more traditional scheduling algorithms like round-robin or priority queues.
 *
 * The algorithm is conceptually simple, but it is also very fragile; even small mistakes can easily result in highly
 * unfair scheduling. Therefore, if you find issues or bugs in the scheduler, please open an issue in the GitHub
 * repository.
 *
 * Below is an overview of how the scheduler works and the relevant concepts. If you are unfamiliar with mathematical
 * notation, don't worry, we will explain everything in plain English as well.
 *
 * ## Weight and Priority
 *
 * First, we assign each thread a "weight", denoted \f$w_i\f$ where \f$i\f$ uniquely identifies the thread. For
 * completeness, let's also define the set \f$A(t)\f$ containing all active threads at real time \f$t\f$.
 *
 * A thread's weight is calculated as the sum of its process's priority and the constant `SCHED_WEIGHT_BASE`. The
 * constant ensures that every thread has a weight greater than zero, as a zero weight would result in division by
 * zero errors later on.
 *
 * The weight determines the share of CPU time a thread ought to receive, with a higher weight receiving a larger
 * share. Specifically, the fraction of CPU time a thread receives is proportional to its weight relative to the
 * total weight of all active threads. This is implemented using "virtual time", as described below.
 *
 * @see [EEVDF](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=805acf7726282721504c8f00575d91ebfd750564)
 * page 2.
 *
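 * As a minimal standalone sketch of the weight calculation (`SCHED_WEIGHT_BASE` mirrors the definition later in this
 * file, while `PRIORITY_MAX` here is a hypothetical stand-in for the real constant from `<sys/proc.h>`):
 *
```c
#include <stdint.h>

// Mirrors the definition later in this header.
#define SCHED_WEIGHT_BASE 1

// Hypothetical stand-in; the real PRIORITY_MAX comes from <sys/proc.h>.
#define PRIORITY_MAX 63

// w_i = priority + SCHED_WEIGHT_BASE, so even priority 0 yields a non-zero weight.
static inline int64_t sched_weight_from_priority(int64_t priority)
{
    return priority + SCHED_WEIGHT_BASE;
}
```
 *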
 * ## Virtual Time
 *
 * The first concept that the EEVDF algorithm introduces is "virtual time". Each scheduler maintains a "virtual clock"
 * that runs at a rate inversely proportional to the total weight of all active threads (all threads in the runqueue).
 * So, if the total weight is \f$10\f$, then each unit of virtual time corresponds to \f$10\f$ units of real CPU time.
 *
 * Each thread should receive an amount of real time equal to its weight for each virtual time unit that passes. For
 * example, if we have two threads, A and B, with weights \f$2\f$ and \f$3\f$ respectively, then for every \f$1\f$ unit
 * of virtual time, thread A should receive \f$2\f$ units of real time and thread B should receive \f$3\f$ units of
 * real time. This is equivalent to saying that for every \f$5\f$ units of real time, thread A should receive \f$2\f$
 * units and thread B should receive \f$3\f$ units.
 *
 * Using this definition of virtual time, we can determine the amount of virtual time \f$v\f$ that has passed between
 * two points in real time \f$t_1\f$ and \f$t_2\f$ as
 *
 * \f[
 * v = \frac{t_2 - t_1}{\sum_{i \in A(t_2)} w_i}
 * \f]
 *
 * under the assumption that \f$A(t_1) = A(t_2)\f$, i.e. the set of active threads has not changed between \f$t_1\f$
 * and \f$t_2\f$.
 *
 * Note how the denominator containing the \f$\sum\f$ symbol evaluates to the sum of the weights \f$w_i\f$ of all
 * active threads \f$i\f$ in \f$A\f$ at \f$t_2\f$, i.e. the total weight of the scheduler cached in
 * `sched->totalWeight`. In pseudocode, this can be expressed as
 *
 * ```
 * vclock_t vtime = (sys_time_uptime() - oldTime) / sched->totalWeight;
 * ```
 *
 * Additionally, the amount of real time \f$r_i\f$ a thread should receive in a given duration of virtual time \f$v\f$
 * can be calculated as
 *
 * \f[
 * r_i = v \cdot w_i.
 * \f]
 *
 * In practice, all we are doing is taking a duration of real time equal to the total weight of all active threads and
 * saying that each thread ought to receive a portion of that time equal to its weight. Virtual time is just a trick
 * to simplify the math.
 *
 * @note All variables storing virtual time values are prefixed with 'v' and use the `vclock_t` type. Variables
 * storing real time values use the `clock_t` type as normal.
 *
 * @see [EEVDF](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=805acf7726282721504c8f00575d91ebfd750564)
 * pages 8-9.
 *
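 * The two conversions above can be sketched in plain integer arithmetic (a simplified stand-in; the real
 * implementation uses the 128-bit fixed-point representation described under "Rounding Errors"):
 *
```c
#include <stdint.h>

// Simplified 64-bit stand-in for the kernel's 128-bit vclock_t.
typedef int64_t vclock_sketch_t;

// Virtual time elapsed between two real times, given the total weight of all
// active threads: v = (t2 - t1) / totalWeight.
static inline vclock_sketch_t vtime_elapsed(int64_t t1, int64_t t2, int64_t totalWeight)
{
    return (t2 - t1) / totalWeight;
}

// Real time thread i is due over v units of virtual time: r_i = v * w_i.
static inline int64_t due_real_time(vclock_sketch_t v, int64_t weight)
{
    return v * weight;
}
```
 *
 * For example, with threads A and B of weights \f$2\f$ and \f$3\f$, \f$5\f$ units of real time correspond to \f$1\f$
 * unit of virtual time, of which A is due \f$2\f$ units and B is due \f$3\f$.
 *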
 * ## Lag
 *
 * Now we can move on to the metrics used to select threads. As the name "Earliest Eligible Virtual Deadline First"
 * suggests, two main concepts are relevant to this process: a thread's "eligibility" and its "virtual deadline". We
 * will start with "eligibility", which is determined by the concept of "lag".
 *
 * The lag \f$l_i\f$ of a thread \f$i\f$ is defined as
 *
 * \f[
 * l_i = r_{i}^{should} - r_{i}^{actual}
 * \f]
 *
 * where \f$r_{i}^{should}\f$ is the amount of real time thread \f$i\f$ should have received and \f$r_{i}^{actual}\f$
 * is the amount of real time thread \f$i\f$ has actually received.
 *
 * Since all real time is allocated among the threads in \f$A(t)\f$, it follows that
 *
 * \f[
 * \sum_{i \in A(t)} \left(r_{i}^{should} - r_{i}^{actual}\right) = 0
 * \f]
 *
 * or in other words, the sum of all lag values across all active threads is always zero.
 *
 * As an example, let's say we have three threads A, B and C with equal weights. To start with, each thread is
 * supposed to have run for 0ms and has actually run for 0ms, so their lag values are:
 *
 * <div align="center">
 * Thread | Lag (ms)
 * -------|-------
 * A | 0
 * B | 0
 * C | 0
 * </div>
 *
 * Now, let's say we give a 30ms (in real time) time slice to thread A, while threads B and C do not run at all. After
 * this, the lag values would be:
 *
 * <div align="center">
 * Thread | Lag (ms)
 * -------|-------
 * A | -20
 * B | 10
 * C | 10
 * </div>
 *
 * What just happened is that each thread should have received one third of the real time (since they are all of equal
 * weight, each weight is 1/3 of the total weight), which is 10ms. Since thread A actually received 30ms of real time,
 * it has run for 20ms more than it should have. Meanwhile, threads B and C have not received any real time at all, so
 * they are "owed" 10ms each.
 *
 * Additionally, notice that \f$0 + 0 + 0 = 0\f$ and \f$-20 + 10 + 10 = 0\f$, i.e. the sum of all lag values is still
 * zero.
 *
 * Finally, this lets us determine the eligibility of a thread. A thread is considered eligible if, and only if, its
 * lag is greater than or equal to zero. In the above example, threads B and C are eligible to run, while thread A is
 * not. Because the lag values sum to zero, there will always be at least one eligible thread as long as there is at
 * least one active thread: if any thread has negative lag, then some other thread must have positive lag to balance
 * it out.
 *
 * @note Fairness is achieved over some long period of time, over which the proportion of real time each thread has
 * received converges to the share it ought to receive. It does not guarantee that each individual time slice is
 * exactly correct, hence it's acceptable for thread A to receive 30ms of real time in the above example.
 *
 * @see [EEVDF](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=805acf7726282721504c8f00575d91ebfd750564)
 * pages 3-5.
 * @see [Completing the EEVDF Scheduler](https://lwn.net/Articles/969062/).
 *
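 * The worked example above can be reproduced numerically. This is a standalone sketch with illustrative names; the
 * kernel does not track lag directly but uses eligible times, as described next:
 *
```c
#include <stdint.h>

#define NTHREADS 3

// Distribute a real-time slice and update lag: every thread's "should" share
// grows by slice * w_i / totalWeight, while only the runner's "actual" grows.
static void account_slice(int64_t lag[NTHREADS], const int64_t weight[NTHREADS],
    int runner, int64_t slice)
{
    int64_t totalWeight = 0;
    for (int i = 0; i < NTHREADS; i++)
    {
        totalWeight += weight[i];
    }
    for (int i = 0; i < NTHREADS; i++)
    {
        lag[i] += (slice * weight[i]) / totalWeight; // time thread i was due
    }
    lag[runner] -= slice; // time the runner actually used
}
```
 *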
 * ## Eligible Time
 *
 * In most cases, it's undesirable to track lag directly, as that would require updating the lag of all threads
 * whenever the scheduler's virtual time is updated, which would violate the desired \f$O(\log n)\f$ complexity of
 * the scheduler.
 *
 * Instead, EEVDF defines the concept of "eligible time": the virtual time at which a thread's lag becomes zero,
 * which is equivalent to the virtual time at which the thread becomes eligible to run.
 *
 * When a thread enters the scheduler for the first time, its eligible time \f$v_{ei}\f$ is the current virtual time
 * of the scheduler, which is equivalent to a lag of \f$0\f$. Whenever the thread runs, its eligible time is advanced
 * by the amount of virtual time corresponding to the real time it has used. This can be calculated as
 *
 * \f[
 * v_{ei} = v_{ei} + \frac{t_{used}}{w_i}
 * \f]
 *
 * where \f$t_{used}\f$ is the amount of real time the thread has used, and \f$w_i\f$ is the thread's weight.
 *
 * @see [EEVDF](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=805acf7726282721504c8f00575d91ebfd750564)
 * pages 10-12 and 14.
 *
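 * A sketch of this update rule (illustrative standalone code with 64-bit stand-ins; the kernel stores these values in
 * `sched_client_t` using fixed point):
 *
```c
#include <stdint.h>

// Simplified 64-bit stand-in for the kernel's 128-bit vclock_t.
typedef int64_t vclock_sketch_t;

// Advance a thread's eligible time after it used tUsed real time:
// v_e += tUsed / w. Heavier threads consume virtual time more slowly.
static inline vclock_sketch_t advance_eligible(vclock_sketch_t veligible,
    int64_t tUsed, int64_t weight)
{
    return veligible + tUsed / weight;
}
```
 *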
 * ## Virtual Deadlines
 *
 * We can now move on to the other part of the name, the "virtual deadline", which is defined as the earliest virtual
 * time at which a thread should have received its due share of CPU time, rounded to some quantum. The scheduler
 * always selects the eligible thread with the earliest virtual deadline to run next.
 *
 * We can calculate the virtual deadline \f$v_{di}\f$ of a thread as
 *
 * \f[
 * v_{di} = v_{ei} + \frac{Q}{w_i}
 * \f]
 *
 * where \f$Q\f$ is a constant time slice defined by the scheduler, in our case `CONFIG_TIME_SLICE`.
 *
 * @see [EEVDF](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=805acf7726282721504c8f00575d91ebfd750564)
 * page 3.
 *
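 * As a standalone sketch (64-bit stand-ins for the fixed-point types; `quantum` stands in for the real
 * `CONFIG_TIME_SLICE` constant):
 *
```c
#include <stdint.h>

// Simplified 64-bit stand-in for the kernel's 128-bit vclock_t.
typedef int64_t vclock_sketch_t;

// v_d = v_e + Q / w: for the same eligible time, a heavier thread gets an
// earlier deadline and is therefore preferred by the scheduler.
static inline vclock_sketch_t virtual_deadline(vclock_sketch_t veligible,
    int64_t quantum, int64_t weight)
{
    return veligible + quantum / weight;
}
```
 *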
 * ## Rounding Errors
 *
 * Before describing the implementation, it is important to note that due to the nature of integer division, rounding
 * errors are inevitable when calculating virtual time and lag.
 *
 * For example, when computing \f$10/3 = 3.333...\f$ we instead get \f$3\f$, losing the fractional part. Over time,
 * these small errors can accumulate and lead to unfair scheduling.
 *
 * It might be tempting to use floating point to mitigate these errors; however, using floating point in a kernel is
 * generally considered very bad practice. Ideally, only user space should be using floating point.
 *
 * Instead, we use a simple technique to mitigate the impact of rounding errors. We represent virtual time and lag
 * using 128-bit fixed-point arithmetic, where the lower 63 bits represent the fractional part.
 *
 * There are two reasons for the decision to use 128 bits over 64 bits despite the performance cost. First, it means
 * that even the maximum possible value of uptime, stored using 64 bits, can still be represented in the fixed-point
 * format without overflowing the integer part, meaning we don't need to worry about overflow at all.
 *
 * Second, testing shows that lag appears to accumulate an error of about \f$10^3\f$ to \f$10^4\f$ in the fractional
 * part every second under heavy load. This means that using 64 bits with a fixed-point offset of 20 bits would
 * result in an error of approximately 1 nanosecond per minute, and considering that the testing was not particularly
 * rigorous, it might be significantly worse in practice. Note that each division can create an error of at most the
 * divisor minus one in the fractional part.
 *
 * If we instead use 128 bits with a fixed-point offset of 63 bits, the same error of \f$10^4\f$ in the fractional
 * part results in an error of approximately \f$1.7 \cdot 10^{-9}\f$ nanoseconds per year, which is obviously
 * negligible even if the actual error is in reality several orders of magnitude worse.
 *
 * For comparisons between `vclock_t` values, we consider two values equal if the difference between their whole
 * parts is less than or equal to `SCHED_EPSILON`.
 *
 * Some might feel concerned about the performance impact of using 128-bit arithmetic. However, consider that by
 * using 128-bit arithmetic, we no longer need any other means of reducing rounding errors. We don't need to worry
 * about remainders from divisions, rounding divisions to the nearest integer instead of truncating, etc. This not
 * only simplifies the code drastically, making it more approachable, but also means that, in practice, the
 * performance impact is negligible. It's a very simple brute-force solution, but simple does not mean bad.
 *
 * @see [Fixed Point Arithmetic](https://en.wikipedia.org/wiki/Fixed-point_arithmetic)
 *
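 * The conversions mirror the `SCHED_FIXED_TO`/`SCHED_FIXED_FROM` macros defined below. This standalone sketch
 * repeatedly splits 10 time units among 3 equal threads and shows how much each representation drifts (the
 * `distribute` harness itself is illustrative, not kernel code):
 *
```c
#include <stdint.h>

typedef __int128 int128_t; // GCC/Clang extension, as used by the kernel

// Mirror the macros defined later in this header.
#define SCHED_FIXED_POINT (63LL)
#define SCHED_FIXED_TO(x) (((int128_t)(x)) << SCHED_FIXED_POINT)
#define SCHED_FIXED_FROM(x) ((int64_t)((int128_t)(x) >> SCHED_FIXED_POINT))

// Hand 10 time units to 3 equal threads for `rounds` rounds, once with plain
// integer division and once in fixed point, returning the totals handed out.
static void distribute(int64_t rounds, int64_t* plainTotal, int64_t* fixedTotal)
{
    int64_t plain = 0;
    int128_t fixed = 0;
    for (int64_t i = 0; i < rounds; i++)
    {
        plain += 3 * (10 / 3);                 // loses 1 whole unit every round
        fixed += 3 * (SCHED_FIXED_TO(10) / 3); // loses only 2/2^63 units per round
    }
    *plainTotal = plain;
    *fixedTotal = SCHED_FIXED_FROM(fixed);
}
```
 *
 * After 1000 rounds, plain integer division has lost 1000 of the 10000 units, while the fixed-point accumulator is
 * off by less than one whole unit.
 *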
 * ## Scheduling
 *
 * With the central concepts introduced, we can now describe how the scheduler works. As mentioned, the goal is to
 * always run the eligible thread with the earliest virtual deadline. To achieve this, each scheduler maintains a
 * runqueue in the form of a Red-Black tree sorted by each thread's virtual deadline.
 *
 * To select the next thread to run, we find the first eligible thread in the runqueue and switch to it. If no
 * eligible thread is found (which means the runqueue is empty), we switch to the idle thread. This process is
 * optimized by storing the minimum eligible time of each subtree in each node of the runqueue, allowing us to skip
 * entire subtrees that do not contain any eligible threads.
 *
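 * The selection walk can be sketched over a simplified node type (illustrative only; the real runqueue uses the
 * kernel's `rbtree_t`, with `vminEligible` maintained on insert and remove):
 *
```c
#include <stddef.h>
#include <stdint.h>

typedef int64_t vclock_sketch_t; // simplified stand-in for vclock_t

// Simplified runqueue node: ordered by vdeadline, augmented with the minimum
// veligible of the whole subtree rooted at this node.
typedef struct rq_node
{
    struct rq_node* left;
    struct rq_node* right;
    vclock_sketch_t vdeadline;
    vclock_sketch_t veligible;
    vclock_sketch_t vminEligible;
} rq_node_t;

// Find the eligible node (veligible <= vtime) with the smallest vdeadline.
// Subtrees whose vminEligible lies in the future cannot contain an eligible
// node and are skipped entirely, keeping the walk logarithmic.
static rq_node_t* pick_next(rq_node_t* node, vclock_sketch_t vtime)
{
    while (node != NULL)
    {
        if (node->vminEligible > vtime)
        {
            return NULL; // nothing in this subtree is eligible yet
        }
        // The left subtree holds earlier deadlines; prefer it if it can
        // contain an eligible node.
        if (node->left != NULL && node->left->vminEligible <= vtime)
        {
            node = node->left;
            continue;
        }
        if (node->veligible <= vtime)
        {
            return node; // earliest-deadline eligible node
        }
        node = node->right; // only later deadlines remain
    }
    return NULL;
}
```
 *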
 * ## Preemption
 *
 * If, at any point in time, a thread with an earlier virtual deadline becomes available to run (for example, when a
 * thread is unblocked), the scheduler will preempt the currently running thread and switch to the newly available
 * thread.
 *
 * ## Idle Thread
 *
 * The idle thread is a special thread that is not considered active (it is not stored in the runqueue) and simply
 * runs an infinite loop that halts the CPU while waiting for an interrupt signaling that a non-idle thread is
 * available to run. Each CPU has its own idle thread.
 *
 * ## Load Balancing
 *
 * Each CPU has its own scheduler and associated runqueue, so we need to balance the load between CPUs, ideally
 * without causing too many cache misses. This means we want to keep threads that have recently run on a CPU on the
 * same CPU when possible. As such, we define a thread to be "cache-cold" on a CPU if the time since it last ran on
 * that CPU is greater than `CONFIG_CACHE_HOT_THRESHOLD`; otherwise it's considered "cache-hot".
 *
 * We use two mechanisms to balance the load between CPUs, one push mechanism and one pull mechanism.
 *
 * The push mechanism is used when a thread is submitted to the scheduler, i.e. when it is created or unblocked. In
 * this case, if the thread is cache-cold it will be added to the runqueue of the CPU with the lowest weight;
 * otherwise, it will be added to the runqueue of the CPU it last ran on.
 *
 * The pull mechanism, also called work stealing, is used when a CPU is about to become idle. The CPU will find the
 * CPU with the highest weight and steal the first cache-cold thread from its runqueue. If no cache-cold thread is
 * found, it will simply run the idle thread.
 *
 * @note The reason we want to avoid a global runqueue is to avoid lock contention. Even a small amount of lock
 * contention in the scheduler will quickly degrade performance, so a CPU is only allowed to lock a single CPU's
 * scheduler at a time. This does cause race conditions while pulling or pushing threads, but the worst case scenario
 * is imperfect load balancing, which is acceptable.
 *
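 * The push-side placement decision can be sketched as follows (a standalone illustration; `cpu_sketch_t` and the
 * helper are hypothetical, and the `CONFIG_CACHE_HOT_THRESHOLD` value shown is an assumed placeholder):
 *
```c
#include <stddef.h>
#include <stdint.h>

#define CONFIG_CACHE_HOT_THRESHOLD 1000000ULL // hypothetical value, in nanoseconds

typedef struct
{
    uint64_t id;
    int64_t totalWeight; // total weight of this CPU's runqueue
} cpu_sketch_t;

// Pick the CPU a newly submitted thread should be queued on: the CPU it last
// ran on if it is still cache-hot there, otherwise the least loaded CPU.
static cpu_sketch_t* pick_target_cpu(cpu_sketch_t* cpus, uint64_t cpuCount,
    cpu_sketch_t* lastCpu, uint64_t now, uint64_t stop)
{
    if (lastCpu != NULL && now - stop <= CONFIG_CACHE_HOT_THRESHOLD)
    {
        return lastCpu; // cache-hot: stay where the caches are warm
    }
    cpu_sketch_t* best = &cpus[0];
    for (uint64_t i = 1; i < cpuCount; i++)
    {
        if (cpus[i].totalWeight < best->totalWeight)
        {
            best = &cpus[i];
        }
    }
    return best; // cache-cold: go to the least loaded CPU
}
```
 *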
 * ## Testing
 *
 * The scheduler is tested using a combination of asserts and tests that are enabled in debug builds (`NDEBUG` not
 * defined). These tests verify that the runqueue is sorted, that the lag values sum to zero (within a margin allowed
 * for rounding errors), and other invariants of the scheduler.
 *
 * ## References
 *
 * References were accessed on 2025-12-02.
 *
 * [Ion Stoica, Hussein Abdel-Wahab, "Earliest Eligible Virtual Deadline First", Old Dominion University,
 * 1996.](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=805acf7726282721504c8f00575d91ebfd750564)
 *
 * [Jonathan Corbet, "An EEVDF CPU scheduler for Linux", LWN.net, March 9, 2023.](https://lwn.net/Articles/925371/)
 *
 * [Jonathan Corbet, "Completing the EEVDF Scheduler", LWN.net, April 11, 2024.](https://lwn.net/Articles/969062/)
 *
 * @{
 */

/**
 * @brief Virtual clock type.
 * @typedef vclock_t
 */
typedef int128_t vclock_t;

/**
 * @brief Lag type.
 * @typedef lag_t
 */
typedef int128_t lag_t;

/**
 * @brief The number of bits used for the fractional part of a virtual clock or lag value.
 *
 * One sign bit, 64 integer bits, 63 fractional bits.
 */
#define SCHED_FIXED_POINT (63LL)

/**
 * @brief Fixed-point zero.
 */
#define SCHED_FIXED_ZERO ((int128_t)0)

/**
 * @brief The minimum difference between two virtual clock or lag values required to consider them unequal.
 */
#define SCHED_EPSILON (10LL)

/**
 * @brief Convert a regular integer to fixed-point representation.
 */
#define SCHED_FIXED_TO(x) (((int128_t)(x)) << SCHED_FIXED_POINT)

/**
 * @brief Convert a fixed-point value to a regular integer.
 */
#define SCHED_FIXED_FROM(x) ((int64_t)((int128_t)(x) >> SCHED_FIXED_POINT))

/**
 * @brief The maximum weight a thread can have.
 */
#define SCHED_WEIGHT_MAX (PRIORITY_MAX + SCHED_WEIGHT_BASE)

/**
 * @brief Base weight added to all threads.
 *
 * Used to prevent division by zero.
 */
#define SCHED_WEIGHT_BASE 1

/**
 * @brief Per-thread scheduler context.
 * @struct sched_client_t
 *
 * Stored in a thread's `sched` member.
 */
typedef struct sched_client
{
    rbnode_t node;  ///< The node in the scheduler's runqueue.
    int64_t weight; ///< The weight of the thread.
    /**
     * The earliest virtual time at which the thread ought to have received its due share of CPU time.
     */
    vclock_t vdeadline;
    vclock_t veligible;    ///< The virtual time at which the thread becomes eligible to run (lag >= 0).
    vclock_t vminEligible; ///< The minimum virtual eligible time of the subtree in the runqueue.
    clock_t stop;          ///< The real time when the thread previously stopped executing.
    cpu_t* lastCpu;        ///< The last CPU the thread was scheduled on; it stopped running at `stop` time.
} sched_client_t;

/**
 * @brief Per-CPU scheduler.
 * @struct sched_t
 *
 * Stored in a CPU's `sched` member.
 *
 * @note The `runThread` and `idleThread` members are declared `volatile` as they can be accessed in interrupt
 * context as well as non-interrupt context.
 */
typedef struct sched
{
    _Atomic(int64_t) totalWeight; ///< The total weight of all threads in the runqueue, not protected by the lock.
    rbtree_t runqueue; ///< Contains all runnable threads, including the currently running thread, sorted by vdeadline.
    vclock_t vtime;     ///< The current virtual time of the CPU.
    clock_t lastUpdate; ///< The real time when the last vtime update occurred.
    lock_t lock;        ///< The lock protecting the scheduler.
    thread_t* volatile idleThread; ///< The idle thread for this CPU.
    thread_t* volatile runThread;  ///< The currently running thread on this CPU.
} sched_t;

/**
 * @brief Initialize the scheduler context for a thread.
 *
 * @param client The scheduler context to initialize.
 */
void sched_client_init(sched_client_t* client);

/**
 * @brief Initialize the scheduler for a CPU.
 *
 * @param sched The scheduler to initialize.
 */
void sched_init(sched_t* sched);

/**
 * @brief Starts the scheduler by jumping to the boot thread.
 *
 * Will never return.
 *
 * @param bootThread The initial thread to switch to.
 */
_NORETURN void sched_start(thread_t* bootThread);

/**
 * @brief Submits a thread to the scheduler.
 *
 * If the thread has previously run within `CONFIG_CACHE_HOT_THRESHOLD` nanoseconds, it will be submitted to the same
 * CPU it last ran on; otherwise it will be submitted to the least loaded CPU.
 *
 * @param thread The thread to submit.
 */
void sched_submit(thread_t* thread);

/**
 * @brief Perform a scheduling operation.
 *
 * This function is called on every interrupt to provide a scheduling opportunity.
 *
 * @param frame The interrupt frame.
 * @param self The CPU performing the scheduling operation.
 */
void sched_do(interrupt_frame_t* frame, cpu_t* self);

/**
 * @brief Checks if the CPU is currently idle.
 *
 * @param cpu The CPU to check.
 * @return `true` if the CPU is idle, `false` otherwise.
 */
bool sched_is_idle(cpu_t* cpu);

/**
 * @brief Retrieves the currently running thread.
 *
 * @return The currently running thread.
 */
thread_t* sched_thread(void);

/**
 * @brief Retrieves the process of the currently running thread.
 *
 * @note Will not increment the reference count of the returned process, as we consider the currently running thread
 * to always be referencing its process.
 *
 * @return The process of the currently running thread.
 */
process_t* sched_process(void);

/**
 * @brief Retrieves the currently running thread without disabling interrupts.
 *
 * @return The currently running thread.
 */
thread_t* sched_thread_unsafe(void);

/**
 * @brief Retrieves the process of the currently running thread without disabling interrupts.
 *
 * @note Will not increment the reference count of the returned process, as we consider the currently running thread
 * to always be referencing its process.
 *
 * @return The process of the currently running thread.
 */
process_t* sched_process_unsafe(void);

/**
 * @brief Sleeps the current thread for a specified duration in nanoseconds.
 *
 * @param timeout The duration to sleep in nanoseconds.
 * @return On success, `0`. On failure, `ERR` and `errno` is set.
 */
uint64_t sched_nanosleep(clock_t timeout);

/**
 * @brief Yield the current thread's time slice to allow other threads to run.
 *
 * @note Currently just sleeps for 1ms, as we can't really yield without complicating the lag math.
 */
void sched_yield(void);

/**
 * @brief Terminates the currently executing process and all its threads.
 *
 * @note Will never return, instead it triggers an interrupt that kills the current thread.
 *
 * @param status The exit status of the process.
 */
_NORETURN void sched_process_exit(int32_t status);

/**
 * @brief Terminates the currently executing thread.
 *
 * @note Will never return, instead it triggers an interrupt that kills the thread.
 */
_NORETURN void sched_thread_exit(void);

/**
 * @brief The idle loop for the scheduler.
 *
 * This is where idle threads will run when there is nothing else to do.
 */
_NORETURN extern void sched_idle_loop(void);

/** @} */