PatchworkOS  19e446b
A non-POSIX operating system.

sched.h
#pragma once

#include <kernel/sched/wait.h>
#include <kernel/sync/lock.h>
#include <sys/defs.h>

#include <sys/list.h>
#include <sys/proc.h>

typedef struct process process_t;
typedef struct thread thread_t;

typedef struct sched sched_t;
/**
 * @brief The Earliest Eligible Virtual Deadline First (EEVDF) scheduler.
 * @defgroup kernel_sched The EEVDF Scheduler
 * @ingroup kernel
 *
 * The scheduler is responsible for allocating CPU time to threads. It does this in such a way as to create the
 * illusion that multiple threads are running simultaneously on a single CPU. Consider that a video is in reality just
 * a series of still images, rapidly displayed one after the other. The scheduler works in the same way, rapidly
 * switching between threads to give the illusion of simultaneous execution.
 *
 * PatchworkOS uses the Earliest Eligible Virtual Deadline First (EEVDF) algorithm for its scheduler, which is a
 * proportional share scheduling algorithm that aims to fairly distribute CPU time among threads based on their weights.
 * This is in contrast to more traditional scheduling algorithms like round-robin or priority queues.
 *
 * The algorithm is conceptually simple, but it is also very fragile: even small mistakes can easily result
 * in highly unfair scheduling. Therefore, if you find issues or bugs with the scheduler, please open an issue in the
 * GitHub repository.
 *
 * Included below is an overview of how the scheduler works and the relevant concepts. If you are unfamiliar with
 * mathematical notation, don't worry, we will explain everything in plain English as well.
 *
 * ## Weight and Priority
 *
 * First, we need to assign each thread a "weight", denoted as \f$w_i\f$ where \f$i\f$ uniquely identifies the thread
 * and, for completeness, let's define the set \f$A(t)\f$ which contains all active threads at real time \f$t\f$. To
 * simplify, for thread \f$i\f$, its weight is \f$w_i\f$.
 *
 * A thread's weight is calculated as the sum of the process's priority and a constant `SCHED_WEIGHT_BASE`. The
 * constant is needed to ensure that all threads have a weight greater than zero, as a weight of zero would result in
 * division by zero errors later on.
 *
 * The weight determines the share of CPU time a thread ought to receive, with a higher weight receiving a
 * larger share. Specifically, the fraction of CPU time a thread receives is proportional to its weight relative to the
 * total weight of all active threads. This is implemented using "virtual time", as described below.
 *
 * @see [EEVDF](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=805acf7726282721504c8f00575d91ebfd750564)
 * page 2.
 *
 * ## Virtual Time
 *
 * The first relevant concept that the EEVDF algorithm introduces is "virtual time". Each scheduler maintains a "virtual
 * clock" that runs at a rate inversely proportional to the total weight of all active threads (all threads in the
 * runqueue). So, if the total weight is \f$10\f$ then each unit of virtual time corresponds to \f$10\f$ units of real
 * CPU time.
 *
 * Each thread should receive an amount of real time equal to its weight for each virtual time unit that passes. For
 * example, if we have two threads, A and B, with weights \f$2\f$ and \f$3\f$ respectively, then for every \f$1\f$ unit
 * of virtual time, thread A should receive \f$2\f$ units of real time and thread B should receive \f$3\f$ units of
 * real time. This is equivalent to saying that for every \f$5\f$ units of real time, thread A should receive \f$2\f$
 * units of real time and thread B should receive \f$3\f$ units of real time.
 *
 * Using this definition of virtual time, we can determine the amount of virtual time \f$v\f$ that has passed between
 * two points in real time \f$t_1\f$ and \f$t_2\f$ as
 *
 * \f[
 * v = \frac{t_2 - t_1}{\sum_{i \in A(t_2)} w_i}
 * \f]
 *
 * under the assumption that \f$A(t_1) = A(t_2)\f$, i.e. the set of active threads has not changed between \f$t_1\f$
 * and \f$t_2\f$.
 *
 * Note how the denominator containing the \f$\sum\f$ symbol evaluates to the sum of all weights \f$w_i\f$ for each
 * active thread \f$i\f$ in \f$A\f$ at \f$t_2\f$, i.e. the total weight of the scheduler cached in `sched->totalWeight`.
 * In pseudocode, this can be expressed as
 *
 * ```
 * vclock_t vtime = (clock_uptime() - oldTime) / sched->totalWeight;
 * ```
 *
 * Additionally, the amount of real time a thread should receive \f$r_i\f$ in a given duration of virtual time \f$v\f$
 * can be calculated as
 *
 * \f[
 * r_i = v \cdot w_i.
 * \f]
 *
 * In practice, all we are doing is taking a duration of real time equal to the total weight of all active threads, and
 * saying that each thread ought to receive a portion of that time equal to its weight. Virtual time is just a trick to
 * simplify the math.
 *
 * @note All variables storing virtual time values will be prefixed with 'v' and use the `vclock_t` type. Variables
 * storing real time values will use the `clock_t` type as normal.
 *
 * @see [EEVDF](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=805acf7726282721504c8f00575d91ebfd750564)
 * pages 8-9.
 *
 * ## Lag
 *
 * Now we can move on to the metrics used to select threads. As the name "Earliest Eligible Virtual Deadline
 * First" suggests, two main concepts are relevant to this process: a thread's "eligibility" and its "virtual
 * deadline". We will start with "eligibility", which is determined by the concept of "lag".
 *
 * The lag \f$l_i\f$ of a thread \f$i\f$ is defined as
 *
 * \f[
 * l_i = r_{i}^{should} - r_{i}^{actual}
 * \f]
 *
 * where \f$r_{i}^{should}\f$ is the amount of real time thread \f$i\f$ should have received and \f$r_{i}^{actual}\f$ is
 * the amount of real time thread \f$i\f$ has actually received.
 *
 * Since all real time is allocated among the threads in \f$A(t)\f$, we can see that
 *
 * \f[
 * \sum_{i \in A(t)} \left(r_{i}^{should} - r_{i}^{actual}\right) = 0
 * \f]
 *
 * or in other words, the sum of all lag values across all active threads is always zero.
 *
 * As an example, let's say we have three threads A, B and C with equal weights. To start with, each thread is supposed
 * to have run for 0ms, and has actually run for 0ms, so their lag values are:
 *
 * <div align="center">
 * Thread | Lag (ms)
 * -------|-------
 * A      | 0
 * B      | 0
 * C      | 0
 * </div>
 *
 * Now, let's say we give a 30ms (in real time) time slice to thread A, while threads B and C do not run at all. After
 * this, the lag values would be:
 *
 * <div align="center">
 * Thread | Lag (ms)
 * -------|-------
 * A      | -20
 * B      | 10
 * C      | 10
 * </div>
 *
 * What just happened is that each thread should have received one third of the real time (since they are all of equal
 * weight such that each of their weights is 1/3 of the total weight), which is 10ms. Therefore, since thread A actually
 * received 30ms of real time, it has run for 20ms more than it should have. Meanwhile, threads B and C have not
 * received any real time at all, so they are "owed" 10ms each.
 *
 * Additionally, notice that \f$0 + 0 + 0 = 0\f$ and \f$-20 + 10 + 10 = 0\f$, i.e. the sum of all lag values is still
 * zero.
 *
 * Finally, this lets us determine the eligibility of a thread. A thread is considered eligible if, and only if, its lag
 * is greater than or equal to zero. In the above example threads B and C are eligible to run, while thread A is not.
 * Notice that because the sum of all lag values is zero, there will always be at least one eligible
 * thread as long as there is at least one active thread, since if there is a thread with negative lag then there must
 * be at least one thread with positive lag to balance it out.
 *
 * @note Fairness is achieved over some long period of time, over which the proportion of real time each thread has
 * received will converge to the share it ought to receive. It does not guarantee that each individual time slice is
 * exactly correct, hence it's acceptable for thread A to receive 30ms of real time in the above example.
 *
 * @see [EEVDF](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=805acf7726282721504c8f00575d91ebfd750564)
 * pages 3-5.
 * @see [Completing the EEVDF Scheduler](https://lwn.net/Articles/969062/).
 *
 * ## Eligible Time
 *
 * In most cases, it's undesirable to track lag directly, as doing so would require updating the lag of all threads
 * whenever the scheduler's virtual time is updated, which would violate the desired \f$O(\log n)\f$ complexity of the
 * scheduler.
 *
 * Instead, EEVDF defines the concept of "eligible time" as the virtual time at which a thread's lag
 * becomes zero, which is equivalent to the virtual time at which the thread becomes eligible to run.
 *
 * When a thread enters the scheduler for the first time, its eligible time \f$v_{ei}\f$ is the current virtual time of
 * the scheduler, which is equivalent to a lag of \f$0\f$. Whenever the thread runs, its eligible time is advanced by
 * the amount of virtual time corresponding to the real time it has used. This can be calculated as
 *
 * \f[
 * v_{ei} = v_{ei} + \frac{t_{used}}{w_i}
 * \f]
 *
 * where \f$t_{used}\f$ is the amount of real time the thread has used, and \f$w_i\f$ is the thread's weight.
 *
 * @see [EEVDF](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=805acf7726282721504c8f00575d91ebfd750564)
 * pages 10-12 and 14.
 *
 * ## Virtual Deadlines
 *
 * We can now move on to the other part of the name, the "virtual deadline", which is defined as the earliest virtual
 * time at which a thread should have received its due share of CPU time, rounded to some quantum. The scheduler always
 * selects the eligible thread with the earliest virtual deadline to run next.
 *
 * We can calculate the virtual deadline \f$v_{di}\f$ of a thread as
 *
 * \f[
 * v_{di} = v_{ei} + \frac{Q}{w_i}
 * \f]
 *
 * where \f$Q\f$ is a constant time slice defined by the scheduler, in our case `CONFIG_TIME_SLICE`.
 *
 * @see [EEVDF](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=805acf7726282721504c8f00575d91ebfd750564)
 * page 3.
 *
 * ## Rounding Errors
 *
 * Before describing the implementation, it is important to note that due to the nature of integer division, rounding
 * errors are inevitable when calculating virtual time and lag.
 *
 * For example, when computing \f$10/3 = 3.333...\f$ we instead get \f$3\f$, losing the fractional part. Over time,
 * these small errors can accumulate and lead to unfair scheduling.
 *
 * It might be tempting to use floating point to mitigate these errors. However, using floating point in a kernel is
 * generally considered very bad practice; ideally, only user space should be using floating point.
 *
 * Instead, we use a simple technique to mitigate the impact of rounding errors. We represent virtual time and lag using
 * 128-bit fixed-point arithmetic, where the lower 63 bits represent the fractional part.
 *
 * There were two reasons for the decision to use 128 bits over 64 bits despite the performance cost. First, it means
 * that even the maximum possible value of uptime, stored using 64 bits, can still be represented in the fixed-point
 * format without overflowing the integer part, meaning we don't need to worry about overflow at all.
 *
 * Second, testing shows that lag appears to accumulate an error of about \f$10^3\f$ to \f$10^4\f$ in the fractional
 * part every second under heavy load. This means that using 64 bits and a fixed-point offset of 20 bits would result
 * in an error of approximately 1 nanosecond per minute, and considering that the testing was not particularly
 * rigorous, it might be significantly worse in practice. Note that each division can create an error of at most the
 * divisor minus one in the fractional part.
 *
 * If we instead use 128 bits with a fixed-point offset of 63 bits, the same error of \f$10^4\f$ in the fractional part
 * results in an error of approximately \f$1.7 \cdot 10^{-9}\f$ nanoseconds per year, which is obviously negligible
 * even if the actual error is in reality several orders of magnitude worse.
 *
 * For comparisons between `vclock_t` values, we consider two values equal if the difference between their whole parts
 * is less than or equal to `SCHED_EPSILON`.
 *
 * Some might feel concerned about the performance impact of using 128-bit arithmetic. However, consider that by using
 * 128-bit arithmetic, we no longer need any other means of reducing rounding errors. We don't need to worry about
 * remainders from divisions, dividing to the nearest integer instead of rounding down, etc. This not only simplifies
 * the code drastically, making it more approachable, but it also means that, in practice, the performance impact is
 * negligible. It's a very simple brute-force solution, but simple does not mean bad.
 *
 * @see [Fixed Point Arithmetic](https://en.wikipedia.org/wiki/Fixed-point_arithmetic)
 *
 * ## Scheduling
 *
 * With the central concepts introduced, we can now describe how the scheduler works. As mentioned, the goal is to
 * always run the eligible thread with the earliest virtual deadline. To achieve this, each scheduler maintains a
 * runqueue in the form of a Red-Black tree sorted by each thread's virtual deadline.
 *
 * To select the next thread to run, we find the first eligible thread in the runqueue and switch to it. If no eligible
 * thread is found (which means the runqueue is empty), we switch to the idle thread. This process is optimized by
 * storing the minimum eligible time of each subtree in each node of the runqueue, allowing us to skip entire subtrees
 * that do not contain any eligible threads.
 *
 * ## Preemption
 *
 * If, at any point in time, a thread with an earlier virtual deadline becomes available to run (for example, when a
 * thread is unblocked), the scheduler will preempt the currently running thread and switch to the newly available
 * thread.
 *
 * ## Idle Thread
 *
 * The idle thread is a special thread that is not considered active (not stored in the runqueue) and simply runs an
 * infinite loop that halts the CPU while waiting for an interrupt signaling that a non-idle thread is available to run.
 * Each CPU has its own idle thread.
 *
 * ## Load Balancing
 *
 * Each CPU has its own scheduler and associated runqueue, so we need to balance the load between CPUs, ideally
 * without causing too many cache misses. This means we want to keep threads that have recently run on a CPU on the
 * same CPU when possible. As such, we define a thread to be "cache-cold" on a CPU if the time since it last ran on
 * that CPU is greater than `CONFIG_CACHE_HOT_THRESHOLD`; otherwise, it's considered "cache-hot".
 *
 * We use two mechanisms to balance the load between CPUs: one push mechanism and one pull mechanism.
 *
 * The push mechanism is used when a thread is submitted to the scheduler, i.e. when it is created or unblocked. In
 * this case, if the thread is cache-cold then the thread will be added to the runqueue of the CPU with the lowest
 * weight. Otherwise, it will be added to the runqueue of the CPU it last ran on.
 *
 * The pull mechanism, also called work stealing, is used when a CPU is about to become idle. The CPU will find the
 * CPU with the highest weight and steal the first cache-cold thread from its runqueue. If no cache-cold threads are
 * found, it will simply run the idle thread.
 *
 * @note The reason we want to avoid a global runqueue is to avoid lock contention. Even a small amount of lock
 * contention in the scheduler will quickly degrade performance; as such, it is only allowed to lock a single CPU's
 * scheduler at a time. This does cause race conditions while pulling or pushing threads, but the worst-case scenario is
 * imperfect load balancing, which is acceptable.
 *
 * ## Testing
 *
 * The scheduler is tested using a combination of asserts and tests that are enabled in debug builds (`NDEBUG` not
 * defined). These tests verify that the runqueue is sorted, that the lag does sum to zero (within a margin from
 * rounding errors), and other invariants of the scheduler.
 *
 * ## References
 *
 * References were accessed on 2025-12-02.
 *
 * [Ion Stoica, Hussein Abdel-Wahab, "Earliest Eligible Virtual Deadline First", Old Dominion University,
 * 1996.](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=805acf7726282721504c8f00575d91ebfd750564)
 *
 * [Jonathan Corbet, "An EEVDF CPU scheduler for Linux", LWN.net, March 9, 2023.](https://lwn.net/Articles/925371/)
 *
 * [Jonathan Corbet, "Completing the EEVDF Scheduler", LWN.net, April 11, 2024.](https://lwn.net/Articles/969062/)
 *
 * @{
 */

/**
 * @brief The per CPU scheduler.
 */
extern sched_t PERCPU _pcpu_sched;

/**
 * @brief Virtual clock type.
 * @typedef vclock_t
 */
typedef int128_t vclock_t;

/**
 * @brief Lag type.
 * @typedef lag_t
 */
typedef int128_t lag_t;

/**
 * @brief The bits used for the fractional part of a virtual clock or lag value.
 *
 * One sign bit, 64 integer bits, 63 fractional bits.
 */
#define SCHED_FIXED_POINT (63LL)

/**
 * @brief Fixed-point zero.
 */
#define SCHED_FIXED_ZERO ((int128_t)0)

/**
 * @brief The minimum difference between two virtual clock or lag values to consider them unequal.
 */
#define SCHED_EPSILON (10LL)

/**
 * @brief Convert a regular integer to fixed-point representation.
 */
#define SCHED_FIXED_TO(x) (((int128_t)(x)) << SCHED_FIXED_POINT)

/**
 * @brief Convert a fixed-point value to a regular integer.
 */
#define SCHED_FIXED_FROM(x) ((int64_t)((int128_t)(x) >> SCHED_FIXED_POINT))

/**
 * @brief The maximum weight a thread can have.
 */
#define SCHED_WEIGHT_MAX (PRIORITY_MAX + SCHED_WEIGHT_BASE)

/**
 * @brief Base weight added to all threads.
 *
 * Used to prevent division by zero.
 */
#define SCHED_WEIGHT_BASE 1

/**
 * @brief Per-thread scheduler context.
 * @struct sched_client_t
 *
 * Stored in a thread's `sched` member.
 */
typedef struct sched_client
{
    rbnode_t node;         ///< The node in the scheduler's runqueue.
    int64_t weight;        ///< The weight of the thread.
    /**
     * The earliest virtual time at which the thread ought to have received its due share of CPU time.
     */
    vclock_t vdeadline;
    vclock_t veligible;    ///< The virtual time at which the thread becomes eligible to run (lag >= 0).
    vclock_t vminEligible; ///< The minimum virtual eligible time of the subtree in the runqueue.
    clock_t stop;          ///< The real time when the thread previously stopped executing.
    cpu_t* lastCpu;        ///< The last CPU the thread was scheduled on; it stopped running at `stop` time.
} sched_client_t;

/**
 * @brief Per-CPU scheduler.
 * @struct sched_t
 *
 * Stored in a CPU's `sched` member.
 *
 * @note The `runThread` and `idleThread` members are declared `volatile` as they can be accessed in interrupt
 * context as well as non-interrupt context.
 */
typedef struct sched
{
    _Atomic(int64_t) totalWeight; ///< The total weight of all threads in the runqueue, not protected by the lock.
    rbtree_t runqueue;  ///< Contains all runnable threads, including the currently running thread, sorted by vdeadline.
    vclock_t vtime;     ///< The current virtual time of the CPU.
    clock_t lastUpdate; ///< The real time when the last vtime update occurred.
    lock_t lock;        ///< The lock protecting the scheduler.
    _Atomic(uint64_t) preemptCount; ///< If greater than zero, preemption is disabled.
    thread_t* volatile idleThread;  ///< The idle thread for this CPU.
    thread_t* volatile runThread;   ///< The currently running thread on this CPU.
} sched_t;

/**
 * @brief Initialize the scheduler context for a thread.
 *
 * @param client The scheduler context to initialize.
 */
void sched_client_init(sched_client_t* client);

/**
 * @brief Starts the scheduler by jumping to the boot thread.
 *
 * Will never return.
 *
 * @param bootThread The initial thread to switch to.
 */
_NORETURN void sched_start(thread_t* bootThread);

/**
 * @brief Submits a thread to the scheduler.
 *
 * If the thread has previously run within `CONFIG_CACHE_HOT_THRESHOLD` nanoseconds, it will be submitted to the same
 * CPU it last ran on; otherwise it will be submitted to the least loaded CPU.
 *
 * @param thread The thread to submit.
 */
void sched_submit(thread_t* thread);

/**
 * @brief Perform a scheduling operation.
 *
 * This function is called on every interrupt to provide a scheduling opportunity.
 *
 * Will report a quiescent state to RCU if the CPU is idle or if a context switch occurs.
 *
 * @param frame The interrupt frame.
 */
void sched_do(interrupt_frame_t* frame);

/**
 * @brief Checks if the CPU is currently idle.
 *
 * @param cpu The CPU to check.
 * @return `true` if the CPU is idle, `false` otherwise.
 */
bool sched_is_idle(cpu_t* cpu);

/**
 * @brief Sleeps the current thread for a specified duration in nanoseconds.
 *
 * @param timeout The duration to sleep in nanoseconds.
 * @return On success, `0`. On failure, `ERR` and `errno` is set.
 */
uint64_t sched_nanosleep(clock_t timeout);

/**
 * @brief Yield the current thread's time slice to allow other threads to run.
 *
 * @note Currently just sleeps for 1ms as we can't really yield without weird lag math.
 */
void sched_yield(void);

/**
 * @brief Disables preemption on the current CPU.
 */
void sched_disable(void);

/**
 * @brief Enables preemption on the current CPU.
 */
void sched_enable(void);

/**
 * @brief Disable preemption for the duration of the current scope.
 */
#define SCHED_SCOPE() \
    sched_disable(); \
    __attribute__((cleanup(sched_scope_cleanup))) int* CONCAT(i, __COUNTER__) = 0

/**
 * @brief Terminates the currently executing process and all its threads.
 *
 * @note Will never return; instead it triggers an interrupt that kills the current thread.
 *
 * @param status The exit status of the process.
 */
_NORETURN void sched_exits(const char* status);

/**
 * @brief Terminates the currently executing thread.
 *
 * @note Will never return; instead it triggers an interrupt that kills the thread.
 */
_NORETURN void sched_thread_exit(void);

/**
 * @brief The idle loop for the scheduler.
 *
 * This is where idle threads will run when there is nothing else to do.
 */
_NORETURN extern void sched_idle_loop(void);

static inline void sched_scope_cleanup(int* _)
{
    sched_enable();
}

/** @} */