PatchworkOS

Virtual Memory Manager (VMM). More...

Data Structures

struct  vmm_shootdown_t
 TLB shootdown structure. More...
 
struct  vmm_cpu_ctx_t
 Per-CPU VMM context. More...
 

Macros

#define VMM_KERNEL_BINARY_MAX   PML_HIGHER_HALF_END
 The maximum address for the content of the kernel binary.
 
#define VMM_KERNEL_BINARY_MIN    PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 1, PML4)
 The minimum address for the content of the kernel binary.
 
#define VMM_KERNEL_STACKS_MAX   VMM_KERNEL_BINARY_MIN
 The maximum address for kernel stacks.
 
#define VMM_KERNEL_STACKS_MIN   PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 3, PML4)
 The minimum address for kernel stacks.
 
#define VMM_KERNEL_HEAP_MAX   VMM_KERNEL_STACKS_MIN
 The maximum address for the kernel heap.
 
#define VMM_KERNEL_HEAP_MIN   PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 5, PML4)
 The minimum address for the kernel heap.
 
#define VMM_IDENTITY_MAPPED_MAX   VMM_KERNEL_HEAP_MIN
 The maximum address for the identity mapped physical memory.
 
#define VMM_IDENTITY_MAPPED_MIN   PML_HIGHER_HALF_START
 The minimum address for the identity mapped physical memory.
 
#define VMM_USER_SPACE_MAX   PML_LOWER_HALF_END
 The maximum address for user space.
 
#define VMM_USER_SPACE_MIN   (0x400000)
 The minimum address for user space.
 
#define VMM_IS_PAGE_ALIGNED(addr)   (((uintptr_t)(addr) & (PAGE_SIZE - 1)) == 0)
 Check if an address is page aligned.
 
#define VMM_MAX_SHOOTDOWN_REQUESTS   16
 Maximum number of shootdown requests that can be queued per CPU.
 

Enumerations

enum  vmm_alloc_flags_t {
  VMM_ALLOC_NONE = 0x0 ,
  VMM_ALLOC_FAIL_IF_MAPPED = 0x1
}
 Flags for vmm_alloc(). More...
 

Functions

void vmm_init (const boot_memory_t *memory, const boot_gop_t *gop, const boot_kernel_t *kernel)
 Initializes the Virtual Memory Manager.
 
void vmm_cpu_ctx_init (vmm_cpu_ctx_t *ctx)
 Initializes a per-CPU VMM context and performs per-CPU VMM initialization.
 
void vmm_map_bootloader_lower_half (thread_t *bootThread)
 Maps the lower half of the address space to the boot thread during kernel initialization.
 
void vmm_unmap_bootloader_lower_half (thread_t *bootThread)
 Unmaps the lower half of the address space after kernel initialization.
 
space_t * vmm_get_kernel_space (void)
 Retrieves the kernel's address space.
 
pml_flags_t vmm_prot_to_flags (prot_t prot)
 Converts the user space memory protection flags to page table entry flags.
 
void * vmm_alloc (space_t *space, void *virtAddr, uint64_t length, pml_flags_t pmlFlags, vmm_alloc_flags_t allocFlags)
 Allocates and maps virtual memory in a given address space.
 
void * vmm_map (space_t *space, void *virtAddr, void *physAddr, uint64_t length, pml_flags_t flags, space_callback_func_t func, void *private)
 Maps physical memory to virtual memory in a given address space.
 
void * vmm_map_pages (space_t *space, void *virtAddr, void **pages, uint64_t pageAmount, pml_flags_t flags, space_callback_func_t func, void *private)
 Maps an array of physical pages to virtual memory in a given address space.
 
uint64_t vmm_unmap (space_t *space, void *virtAddr, uint64_t length)
 Unmaps virtual memory from a given address space.
 
uint64_t vmm_protect (space_t *space, void *virtAddr, uint64_t length, pml_flags_t flags)
 Changes memory protection flags for a virtual memory region in a given address space.
 
void vmm_shootdown_handler (interrupt_frame_t *frame, cpu_t *self)
 TLB shootdown interrupt handler.
 

Detailed Description

Virtual Memory Manager (VMM).

The Virtual Memory Manager (VMM) is responsible for allocating and mapping virtual memory.

TLB Shootdowns

When we change a mapping in an address space, it's possible that other CPUs have the same address space loaded and still have the old mappings in their TLB (Translation Lookaside Buffer), a hardware cache of page table entries. This cache must be invalidated on those CPUs whenever we change the mappings of a page table. This is called a TLB shootdown.

Details can be found in vmm_map(), vmm_unmap() and vmm_protect().

Address Space Layout

The address space layout is split into several regions. For convenience, the regions are defined using page table indices: the entire virtual address space is divided into 512 regions, each mapped by one entry in the top-level page table (PML4), with 256 entries for the lower half and 256 for the higher half. This makes it very easy to copy mappings between address spaces by just copying the relevant PML4 entries.

First, at the very top, we have the kernel binary itself and all its data: code, bss, rodata, etc. This region uses the last index in the page table. The region will never be fully filled, and the kernel itself is not guaranteed to be loaded at the very start of it; the exact address is decided by the linker.lds script. This section is mapped identically for all processes.

Secondly, we have the per-thread kernel stacks, one stack per thread. Each stack is allocated on demand and can grow dynamically up to CONFIG_MAX_KERNEL_STACK_PAGES pages not including its guard page. This section takes up 2 indices in the page table and will be process-specific as each process has its own threads and thus its own kernel stacks.

Thirdly, we have the kernel heap, which is used for dynamic memory allocation in the kernel. The kernel heap starts at VMM_KERNEL_HEAP_MIN and grows up towards VMM_KERNEL_HEAP_MAX. This section takes up 2 indices in the page table and is mapped identically for all processes.

Fourthly, we have the identity-mapped physical memory. All physical memory is mapped here by simply taking the original physical address and adding 0xFFFF800000000000 to it, so the physical address 0x123456 is mapped to the virtual address 0xFFFF800000123456. This section takes up all remaining indices below the kernel heap down to the start of the higher half and is mapped identically for all processes.

Fifthly, we have non-canonical memory, which is impossible to access; any access triggers a general protection fault. This section takes up the gap between the lower half and higher half of the address space.

Finally, we have user space, which starts at 0x400000 (4MiB) and goes up to the top of the lower half. The first 4MiB is left unmapped to catch null pointer dereferences. This section is different for each process.

Macro Definition Documentation

◆ VMM_IDENTITY_MAPPED_MAX

#define VMM_IDENTITY_MAPPED_MAX   VMM_KERNEL_HEAP_MIN

The maximum address for the identity mapped physical memory.

Definition at line 71 of file vmm.h.

◆ VMM_IDENTITY_MAPPED_MIN

#define VMM_IDENTITY_MAPPED_MIN   PML_HIGHER_HALF_START

The minimum address for the identity mapped physical memory.

Definition at line 72 of file vmm.h.

◆ VMM_IS_PAGE_ALIGNED

#define VMM_IS_PAGE_ALIGNED(addr)   (((uintptr_t)(addr) & (PAGE_SIZE - 1)) == 0)

Check if an address is page aligned.

Parameters
addr: The address to check.
Returns
true if the address is page aligned, false otherwise.

Definition at line 83 of file vmm.h.

◆ VMM_KERNEL_BINARY_MAX

#define VMM_KERNEL_BINARY_MAX   PML_HIGHER_HALF_END

The maximum address for the content of the kernel binary.

Definition at line 61 of file vmm.h.

◆ VMM_KERNEL_BINARY_MIN

#define VMM_KERNEL_BINARY_MIN    PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 1, PML4)

The minimum address for the content of the kernel binary.

Definition at line 63 of file vmm.h.

◆ VMM_KERNEL_HEAP_MAX

#define VMM_KERNEL_HEAP_MAX   VMM_KERNEL_STACKS_MIN

The maximum address for the kernel heap.

Definition at line 68 of file vmm.h.

◆ VMM_KERNEL_HEAP_MIN

#define VMM_KERNEL_HEAP_MIN   PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 5, PML4)

The minimum address for the kernel heap.

Definition at line 69 of file vmm.h.

◆ VMM_KERNEL_STACKS_MAX

#define VMM_KERNEL_STACKS_MAX   VMM_KERNEL_BINARY_MIN

The maximum address for kernel stacks.

Definition at line 65 of file vmm.h.

◆ VMM_KERNEL_STACKS_MIN

#define VMM_KERNEL_STACKS_MIN   PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 3, PML4)

The minimum address for kernel stacks.

Definition at line 66 of file vmm.h.

◆ VMM_MAX_SHOOTDOWN_REQUESTS

#define VMM_MAX_SHOOTDOWN_REQUESTS   16

Maximum number of shootdown requests that can be queued per CPU.

Definition at line 102 of file vmm.h.

◆ VMM_USER_SPACE_MAX

#define VMM_USER_SPACE_MAX   PML_LOWER_HALF_END

The maximum address for user space.

Definition at line 74 of file vmm.h.

◆ VMM_USER_SPACE_MIN

#define VMM_USER_SPACE_MIN   (0x400000)

The minimum address for user space.

Definition at line 75 of file vmm.h.

Enumeration Type Documentation

◆ vmm_alloc_flags_t

Flags for vmm_alloc().

Enumerator
VMM_ALLOC_NONE 
VMM_ALLOC_FAIL_IF_MAPPED 

If set and any page is already mapped, fail and set errno to EEXIST.

Definition at line 121 of file vmm.h.

Function Documentation

◆ vmm_alloc()

void * vmm_alloc ( space_t *  space,
void *  virtAddr,
uint64_t  length,
pml_flags_t  pmlFlags,
vmm_alloc_flags_t  allocFlags 
)

Allocates and maps virtual memory in a given address space.

The allocated memory will be backed by newly allocated physical memory pages and is not guaranteed to be zeroed.

Will overwrite any existing mappings in the specified range if VMM_ALLOC_FAIL_IF_MAPPED is not set.

See also
vmm_map() for details on TLB shootdowns.
Parameters
space: The target address space; if NULL, the kernel space is used.
virtAddr: The desired virtual address. If NULL, the kernel chooses an available address.
length: The length of the virtual memory region to allocate, in bytes.
pmlFlags: The page table flags for the mapping; will always include PML_OWNED, must have PML_PRESENT set.
allocFlags: Allocation behavior flags, see vmm_alloc_flags_t.
Returns
On success, the virtual address. On failure, returns NULL and errno is set.

Definition at line 168 of file vmm.c.

References EBUSY, EEXIST, EINVAL, ENOMEM, EOK, ERR, errno, space_mapping_t::flags, MIN, NULL, PAGE_SIZE, page_table_is_pinned(), page_table_is_unmapped(), page_table_map_pages(), space_mapping_t::pageAmount, space_t::pageTable, PML_CALLBACK_NONE, PML_OWNED, PML_PRESENT, pmm_alloc_pages(), space_mapping_end(), space_mapping_start(), space_mapping_t::virtAddr, VMM_ALLOC_FAIL_IF_MAPPED, vmm_get_kernel_space(), and vmm_page_table_unmap_with_shootdown().

Referenced by const_one_mmap(), const_zero_mmap(), loader_load_program(), smp_others_init(), and thread_handle_page_fault().

◆ vmm_cpu_ctx_init()

void vmm_cpu_ctx_init ( vmm_cpu_ctx_t *  ctx)

Initializes a per-CPU VMM context and performs per-CPU VMM initialization.

Must be called on the CPU that owns the context.

Parameters
ctx: The CPU VMM context to initialize.

Definition at line 110 of file vmm.c.

References CPU_ID_BOOTSTRAP, cpu_t::id, smp_self_unsafe(), and vmm_cpu_ctx_init_common().

Referenced by cpu_init().

◆ vmm_get_kernel_space()

space_t * vmm_get_kernel_space ( void  )

Retrieves the kernel's address space.

Returns
Pointer to the kernel's address space.

Definition at line 138 of file vmm.c.

References kernelSpace.

Referenced by space_map_kernel_space_region(), trampoline_init(), vmm_alloc(), vmm_map(), vmm_map_pages(), vmm_protect(), and vmm_unmap().

◆ vmm_init()

void vmm_init ( const boot_memory_t *  memory,
const boot_gop_t *  gop,
const boot_kernel_t *  kernel 
)

Initializes the Virtual Memory Manager.

◆ vmm_map()

void * vmm_map ( space_t *  space,
void *  virtAddr,
void *  physAddr,
uint64_t  length,
pml_flags_t  flags,
space_callback_func_t  func,
void *  private 
)

Maps physical memory to virtual memory in a given address space.

Will overwrite any existing mappings in the specified range.

When mapping a previously unmapped page there is no need for a TLB shootdown, since a non-present page cannot be cached in the TLB; any prior access would simply have caused a non-present page fault. However, if the page is already mapped then it must first be unmapped, which requires a shootdown as described in vmm_unmap().

Parameters
space: The target address space; if NULL, the kernel space is used.
virtAddr: The desired virtual address to map to; if NULL, the kernel chooses an available address.
physAddr: The physical address to map from. Must not be NULL.
length: The length of the memory region to map, in bytes.
flags: The page table flags for the mapping, must have PML_PRESENT set.
func: The callback function to call when the mapped memory is unmapped or the address space is freed. If NULL, no callback will be called.
private: Private data to pass to the callback function.
Returns
On success, the virtual address. On failure, returns NULL and errno is set.

Definition at line 231 of file vmm.c.

References EBUSY, EINVAL, ENOMEM, ENOSPC, EOK, ERR, errno, NULL, page_table_is_pinned(), page_table_is_unmapped(), page_table_map(), space_mapping_t::pageAmount, space_t::pageTable, space_mapping_t::physAddr, PML_CALLBACK_NONE, PML_MAX_CALLBACK, PML_PRESENT, space_alloc_callback(), space_free_callback(), space_mapping_end(), space_mapping_start(), space_mapping_t::virtAddr, vmm_get_kernel_space(), and vmm_page_table_unmap_with_shootdown().

Referenced by aml_ensure_mem_is_mapped(), gop_mmap(), hpet_init(), ioapic_all_init(), lapic_init(), and pci_config_init().

◆ vmm_map_bootloader_lower_half()

void vmm_map_bootloader_lower_half ( thread_t *  bootThread)

Maps the lower half of the address space to the boot thread during kernel initialization.

We still need to access bootloader data, like the memory map, while the kernel is initializing, so we keep the lower half mapped until the kernel is fully initialized. After that we can unmap the lower half from both kernel space and the boot thread's address space.

The bootloader's lower half mappings are transferred to the kernel space mappings during boot, so we just copy them from there.

Parameters
bootThread: The boot thread, which will have its address space modified.

Definition at line 121 of file vmm.c.

References bootThread, pml_t::entries, kernelSpace, space_t::pageTable, page_table_t::pml4, PML_INDEX_LOWER_HALF_MAX, PML_INDEX_LOWER_HALF_MIN, thread_t::process, and process_t::space.

Referenced by init_finalize().

◆ vmm_map_pages()

void * vmm_map_pages ( space_t *  space,
void *  virtAddr,
void **  pages,
uint64_t  pageAmount,
pml_flags_t  flags,
space_callback_func_t  func,
void *  private 
)

Maps an array of physical pages to virtual memory in a given address space.

Will overwrite any existing mappings in the specified range.

See also
vmm_map() for details on TLB shootdowns.
Parameters
space: The target address space; if NULL, the kernel space is used.
virtAddr: The desired virtual address to map to; if NULL, the kernel chooses an available address.
pages: An array of physical page addresses to map.
pageAmount: The number of physical pages in the pages array, must not be zero.
flags: The page table flags for the mapping, must have PML_PRESENT set.
func: The callback function to call when the mapped memory is unmapped or the address space is freed. If NULL, no callback will be called.
private: Private data to pass to the callback function.
Returns
On success, the virtual address. On failure, returns NULL and errno is set.

Definition at line 285 of file vmm.c.

References EBUSY, EINVAL, ENOMEM, ENOSPC, EOK, ERR, errno, space_mapping_t::flags, NULL, PAGE_SIZE, page_table_is_pinned(), page_table_is_unmapped(), page_table_map_pages(), space_mapping_t::pageAmount, pageAmount, space_t::pageTable, PML_CALLBACK_NONE, PML_MAX_CALLBACK, PML_PRESENT, space_alloc_callback(), space_free_callback(), space_mapping_end(), space_mapping_start(), space_mapping_t::virtAddr, vmm_get_kernel_space(), and vmm_page_table_unmap_with_shootdown().

Referenced by shmem_mmap(), and shmem_object_allocate_pages().

◆ vmm_prot_to_flags()

pml_flags_t vmm_prot_to_flags ( prot_t  prot)

Converts the user space memory protection flags to page table entry flags.

Parameters
prot: The memory protection flags.
Returns
The corresponding page table entry flags.

Definition at line 143 of file vmm.c.

References PML_PRESENT, PML_WRITE, PROT_NONE, PROT_READ, and PROT_WRITE.

Referenced by SYSCALL_DEFINE(), and SYSCALL_DEFINE().

◆ vmm_protect()

uint64_t vmm_protect ( space_t *  space,
void *  virtAddr,
uint64_t  length,
pml_flags_t  flags 
)

Changes memory protection flags for a virtual memory region in a given address space.

The memory region must be fully mapped, otherwise this function will fail.

When changing memory protection flags, TLB shootdowns are needed on all CPUs that have the address space loaded. To perform the shootdown we first update the page entries for the region, then perform the shootdown, wait for acknowledgements from all CPUs, and finally return.

Parameters
space: The target address space; if NULL, the kernel space is used.
virtAddr: The virtual address of the memory region.
length: The length of the memory region, in bytes.
flags: The new page table flags for the memory region; if PML_PRESENT is not set, the memory will be unmapped.
Returns
On success, 0. On failure, ERR and errno is set.

Definition at line 401 of file vmm.c.

References EBUSY, EINVAL, ENOENT, EOK, ERR, errno, space_mapping_t::flags, NULL, page_table_is_pinned(), page_table_is_unmapped(), page_table_set_flags(), space_mapping_t::pageAmount, space_t::pageTable, PML_PRESENT, space_mapping_end(), space_mapping_start(), space_tlb_shootdown(), space_mapping_t::virtAddr, vmm_get_kernel_space(), and vmm_unmap().

Referenced by loader_load_program(), and SYSCALL_DEFINE().

◆ vmm_shootdown_handler()

void vmm_shootdown_handler ( interrupt_frame_t *  frame,
cpu_t *  self 
)

TLB shootdown interrupt handler.

◆ vmm_unmap()

uint64_t vmm_unmap ( space_t *  space,
void *  virtAddr,
uint64_t  length 
)

Unmaps virtual memory from a given address space.

If the memory is already unmapped, this function will do nothing.

When unmapping memory, there is a need for TLB shootdowns on all CPUs that have the address space loaded. To perform the shootdown we first set all page entries for the region to be non-present, perform the shootdown, wait for acknowledgements from all CPUs, and finally free any underlying physical memory if the PML_OWNED flag is set.

Parameters
space: The target address space; if NULL, the kernel space is used.
virtAddr: The virtual address of the memory region.
length: The length of the memory region, in bytes.
Returns
On success, 0. On failure, ERR and errno is set.

Definition at line 339 of file vmm.c.

References assert, BITMAP_FOR_EACH_SET, space_t::callbackBitmap, space_t::callbacks, EBUSY, EINVAL, EOK, ERR, errno, space_callback_t::func, NULL, page_table_collect_callbacks(), page_table_is_pinned(), space_callback_t::pageAmount, space_mapping_t::pageAmount, space_t::pageTable, PML_MAX_CALLBACK, PML_NONE, space_callback_t::private, space_free_callback(), space_mapping_end(), space_mapping_start(), space_mapping_t::virtAddr, vmm_get_kernel_space(), and vmm_page_table_unmap_with_shootdown().

Referenced by stack_pointer_deinit(), SYSCALL_DEFINE(), and vmm_protect().

◆ vmm_unmap_bootloader_lower_half()

void vmm_unmap_bootloader_lower_half ( thread_t *  bootThread)

Unmaps the lower half of the address space after kernel initialization.

Will unmap the lower half from both the kernel space and the boot thread's address space.

After this is called, the bootloader's lower half mappings will be destroyed and the kernel will only have its own mappings.

Parameters
bootThread: The boot thread, which will have its address space modified.

Definition at line 129 of file vmm.c.

References bootThread, pml_t::entries, kernelSpace, space_t::pageTable, page_table_t::pml4, PML_INDEX_LOWER_HALF_MAX, PML_INDEX_LOWER_HALF_MIN, thread_t::process, pml_entry_t::raw, and process_t::space.

Referenced by init_finalize().