PatchworkOS
Virtual Memory Manager (VMM). More...
Data Structures | |
| struct | vmm_shootdown_t |
| TLB shootdown structure. More... | |
| struct | vmm_cpu_ctx_t |
| Per-CPU VMM context. More... | |
Macros | |
| #define | VMM_KERNEL_BINARY_MAX PML_HIGHER_HALF_END |
| The maximum address for the content of the kernel binary. | |
| #define | VMM_KERNEL_BINARY_MIN PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 1, PML4) |
| The minimum address for the content of the kernel binary. | |
| #define | VMM_KERNEL_STACKS_MAX VMM_KERNEL_BINARY_MIN |
| The maximum address for kernel stacks. | |
| #define | VMM_KERNEL_STACKS_MIN PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 3, PML4) |
| The minimum address for kernel stacks. | |
| #define | VMM_KERNEL_HEAP_MAX VMM_KERNEL_STACKS_MIN |
| The maximum address for the kernel heap. | |
| #define | VMM_KERNEL_HEAP_MIN PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 5, PML4) |
| The minimum address for the kernel heap. | |
| #define | VMM_IDENTITY_MAPPED_MAX VMM_KERNEL_HEAP_MIN |
| The maximum address for the identity mapped physical memory. | |
| #define | VMM_IDENTITY_MAPPED_MIN PML_HIGHER_HALF_START |
| The minimum address for the identity mapped physical memory. | |
| #define | VMM_USER_SPACE_MAX PML_LOWER_HALF_END |
| The maximum address for user space. | |
| #define | VMM_USER_SPACE_MIN (0x400000) |
| The minimum address for user space. | |
| #define | VMM_IS_PAGE_ALIGNED(addr) (((uintptr_t)(addr) & (PAGE_SIZE - 1)) == 0) |
| Check if an address is page aligned. | |
| #define | VMM_MAX_SHOOTDOWN_REQUESTS 16 |
| Maximum number of shootdown requests that can be queued per CPU. | |
Enumerations | |
| enum | vmm_alloc_flags_t { VMM_ALLOC_NONE = 0x0 , VMM_ALLOC_FAIL_IF_MAPPED = 0x1 } |
Flags for vmm_alloc(). More... | |
Functions | |
| void | vmm_init (const boot_memory_t *memory, const boot_gop_t *gop, const boot_kernel_t *kernel) |
| Initializes the Virtual Memory Manager. | |
| void | vmm_cpu_ctx_init (vmm_cpu_ctx_t *ctx) |
| Initializes a per-CPU VMM context and performs per-CPU VMM initialization. | |
| void | vmm_map_bootloader_lower_half (thread_t *bootThread) |
| Maps the lower half of the address space to the boot thread during kernel initialization. | |
| void | vmm_unmap_bootloader_lower_half (thread_t *bootThread) |
| Unmaps the lower half of the address space after kernel initialization. | |
| space_t * | vmm_get_kernel_space (void) |
| Retrieves the kernel's address space. | |
| pml_flags_t | vmm_prot_to_flags (prot_t prot) |
| Converts the user space memory protection flags to page table entry flags. | |
| void * | vmm_alloc (space_t *space, void *virtAddr, uint64_t length, pml_flags_t pmlFlags, vmm_alloc_flags_t allocFlags) |
| Allocates and maps virtual memory in a given address space. | |
| void * | vmm_map (space_t *space, void *virtAddr, void *physAddr, uint64_t length, pml_flags_t flags, space_callback_func_t func, void *private) |
| Maps physical memory to virtual memory in a given address space. | |
| void * | vmm_map_pages (space_t *space, void *virtAddr, void **pages, uint64_t pageAmount, pml_flags_t flags, space_callback_func_t func, void *private) |
| Maps an array of physical pages to virtual memory in a given address space. | |
| uint64_t | vmm_unmap (space_t *space, void *virtAddr, uint64_t length) |
| Unmaps virtual memory from a given address space. | |
| uint64_t | vmm_protect (space_t *space, void *virtAddr, uint64_t length, pml_flags_t flags) |
| Changes memory protection flags for a virtual memory region in a given address space. | |
| void | vmm_shootdown_handler (interrupt_frame_t *frame, cpu_t *self) |
| TLB shootdown interrupt handler. | |
Virtual Memory Manager (VMM).
The Virtual Memory Manager (VMM) is responsible for allocating and mapping virtual memory.
When we change a mapping in an address space, it's possible that other CPUs have the same address space loaded and still hold the old mappings in their TLB, a hardware cache of page table entries. This cache must be invalidated whenever we change the mappings of a page table; doing so across CPUs is called a TLB shootdown.
Details can be found in vmm_map(), vmm_unmap() and vmm_protect().
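The sequence is always the same: update the page table entries first, notify the other CPUs, then wait for every CPU to acknowledge. Below is a minimal free-standing sketch of that acknowledgement pattern; the type and helper names are illustrative, not PatchworkOS's real API (the real entry points are vmm_unmap(), vmm_protect() and vmm_shootdown_handler(), documented below).

```c
#include <stdatomic.h>
#include <stdint.h>

// Illustrative shootdown request; the kernel's real structure is vmm_shootdown_t.
typedef struct
{
    atomic_uint acks;    // incremented by each CPU's shootdown handler
    uintptr_t virtAddr;  // start of the range to invalidate
    uint64_t pageAmount; // number of pages to invalidate
} shootdown_sketch_t;

// Initiator side: runs after the page table entries have been updated,
// so no CPU can re-cache the old translation while we wait.
static void shootdown_wait(shootdown_sketch_t *req, unsigned otherCpus)
{
    // (A shootdown IPI would be sent to each target CPU here.)
    while (atomic_load(&req->acks) < otherCpus)
    {
        // Spin until every CPU has invalidated its stale TLB entries.
    }
}

// Handler side: runs on each target CPU in interrupt context.
static void shootdown_ack(shootdown_sketch_t *req)
{
    // (Each page in the range would be invalidated here, e.g. via invlpg.)
    atomic_fetch_add(&req->acks, 1);
}
```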
The address space layout is split into several regions. For convenience, the regions are defined using page table indices: the entire virtual address space is divided into 512 regions, each mapped by one entry in the top-level page table (PML4), with 256 entries for the lower half and 256 for the higher half. This lets us copy mappings between address spaces by simply copying the relevant PML4 entries.
First, at the very top, we have the kernel binary itself and all its data (code, bss, rodata, etc.). This region uses the last index in the page table. It will never be fully filled, and the kernel itself is not guaranteed to be loaded at the very start of the region; the exact address is decided by the linker.lds script. This section is mapped identically for all processes.
Secondly, we have the per-thread kernel stacks, one stack per thread. Each stack is allocated on demand and can grow dynamically up to CONFIG_MAX_KERNEL_STACK_PAGES pages, not including its guard page. This section takes up 2 indices in the page table and is process-specific, as each process has its own threads and thus its own kernel stacks.
Thirdly, we have the kernel heap, which is used for dynamic memory allocation in the kernel. The kernel heap starts at VMM_KERNEL_HEAP_MIN and grows up towards VMM_KERNEL_HEAP_MAX. This section takes up 2 indices in the page table and is mapped identically for all processes.
Fourthly (is fourthly really a word?), we have the identity mapped physical memory. All physical memory will be mapped here by simply taking the original physical address and adding 0xFFFF800000000000 to it. This means that the physical address 0x123456 will be mapped to the virtual address 0xFFFF800000123456. This section takes up all remaining indices below the kernel heap to the end of the higher half and is mapped identically for all processes.
Fifthly, we have non-canonical memory, which is impossible to access; any access triggers a general protection fault. This section takes up the gap between the lower half and the higher half of the address space.
Finally, we have user space, which starts at 0x400000 (4MiB) and goes up to the top of the lower half. The first 4MiB is left unmapped to catch null pointer dereferences. This section is different for each process.
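To make the identity-mapped region concrete, here is a minimal sketch of the translation described above. The helper names are hypothetical; only the base address 0xFFFF800000000000 comes from this page.

```c
#include <stdint.h>

// Base of the identity-mapped region (VMM_IDENTITY_MAPPED_MIN resolves to
// PML_HIGHER_HALF_START, i.e. 0xFFFF800000000000 as described above).
#define IDENTITY_MAP_BASE 0xFFFF800000000000ULL

// Translate a physical address into the identity-mapped region,
// e.g. 0x123456 -> 0xFFFF800000123456.
static inline void *phys_to_virt_sketch(uintptr_t physAddr)
{
    return (void *)(physAddr + IDENTITY_MAP_BASE);
}

// And back again, for addresses inside the identity-mapped region.
static inline uintptr_t virt_to_phys_sketch(void *virtAddr)
{
    return (uintptr_t)virtAddr - IDENTITY_MAP_BASE;
}
```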
| enum vmm_alloc_flags_t |
Flags for vmm_alloc().
| Enumerator | |
|---|---|
| VMM_ALLOC_NONE | |
| VMM_ALLOC_FAIL_IF_MAPPED | If set and any page in the range is already mapped, fail and set errno. |
| void * vmm_alloc | ( | space_t * | space, |
| void * | virtAddr, | ||
| uint64_t | length, | ||
| pml_flags_t | pmlFlags, | ||
| vmm_alloc_flags_t | allocFlags | ||
| ) |
Allocates and maps virtual memory in a given address space.
The allocated memory will be backed by newly allocated physical memory pages and is not guaranteed to be zeroed.
Will overwrite any existing mappings in the specified range if VMM_ALLOC_FAIL_IF_MAPPED is not set.
See vmm_map() for details on TLB shootdowns.
| space | The target address space, if NULL, the kernel space is used. |
| virtAddr | The desired virtual address. If NULL, the kernel chooses an available address. |
| length | The length of the virtual memory region to allocate, in bytes. |
| pmlFlags | The page table flags for the mapping. PML_OWNED will always be included; PML_PRESENT must be set. |
| allocFlags | Flags controlling allocation behavior, see vmm_alloc_flags_t. |
Returns: On success, the virtual address of the allocated memory. On failure, NULL and errno is set.
Definition at line 168 of file vmm.c.
References EBUSY, EEXIST, EINVAL, ENOMEM, EOK, ERR, errno, space_mapping_t::flags, MIN, NULL, PAGE_SIZE, page_table_is_pinned(), page_table_is_unmapped(), page_table_map_pages(), space_mapping_t::pageAmount, space_t::pageTable, PML_CALLBACK_NONE, PML_OWNED, PML_PRESENT, pmm_alloc_pages(), space_mapping_end(), space_mapping_start(), space_mapping_t::virtAddr, VMM_ALLOC_FAIL_IF_MAPPED, vmm_get_kernel_space(), and vmm_page_table_unmap_with_shootdown().
Referenced by const_one_mmap(), const_zero_mmap(), loader_load_program(), smp_others_init(), and thread_handle_page_fault().
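A minimal usage sketch, built only from the names documented on this page; the error handling is illustrative:

```c
// Allocate one writable kernel page at a kernel-chosen address;
// passing NULL as the space selects the kernel space.
void *page = vmm_alloc(NULL, NULL, PAGE_SIZE, PML_PRESENT | PML_WRITE,
    VMM_ALLOC_NONE);
if (page == NULL)
{
    // errno holds the failure cause (e.g. ENOMEM).
}
```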
| void vmm_cpu_ctx_init | ( | vmm_cpu_ctx_t * | ctx | ) |
Initializes a per-CPU VMM context and performs per-CPU VMM initialization.
Must be called on the CPU that owns the context.
| ctx | The CPU VMM context to initialize. |
Definition at line 110 of file vmm.c.
References CPU_ID_BOOTSTRAP, cpu_t::id, smp_self_unsafe(), and vmm_cpu_ctx_init_common().
Referenced by cpu_init().
| space_t * vmm_get_kernel_space | ( | void | ) |
Retrieves the kernel's address space.
Definition at line 138 of file vmm.c.
References kernelSpace.
Referenced by space_map_kernel_space_region(), trampoline_init(), vmm_alloc(), vmm_map(), vmm_map_pages(), vmm_protect(), and vmm_unmap().
| void vmm_init | ( | const boot_memory_t * | memory, |
| const boot_gop_t * | gop, | ||
| const boot_kernel_t * | kernel | ||
| ) |
Initializes the Virtual Memory Manager.
| memory | The memory information provided by the bootloader. |
| gop | The graphics output protocol provided by the bootloader. |
| kernel | The structure provided by the bootloader specifying, for example, the addresses of the kernel. |
Definition at line 41 of file vmm.c.
References assert, BOOT_MEMORY_MAP_GET_DESCRIPTOR, BYTES_TO_PAGES, CPU_ID_BOOTSTRAP, cr3_write(), pml_t::entries, ERR, gop, cpu_t::id, kernelSpace, boot_memory_map_t::length, LOG_DEBUG, LOG_INFO, boot_memory_t::map, NULL, PAGE_SIZE, page_table_map(), space_t::pageTable, panic(), boot_gop_t::physAddr, boot_kernel_t::physStart, page_table_t::pml4, PML_CALLBACK_NONE, PML_ENSURE_LOWER_HALF, PML_GLOBAL, PML_HIGHER_HALF_START, PML_INDEX_LOWER_HALF_MAX, PML_INDEX_LOWER_HALF_MIN, PML_LOWER_HALF_END, PML_PRESENT, PML_WRITE, boot_gop_t::size, boot_kernel_t::size, smp_self_unsafe(), space_init(), SPACE_USE_PMM_BITMAP, boot_memory_t::table, boot_gop_t::virtAddr, boot_kernel_t::virtStart, cpu_t::vmm, vmm_cpu_ctx_init_common(), VMM_IDENTITY_MAPPED_MAX, VMM_IDENTITY_MAPPED_MIN, VMM_KERNEL_BINARY_MAX, VMM_KERNEL_BINARY_MIN, VMM_KERNEL_HEAP_MAX, VMM_KERNEL_HEAP_MIN, VMM_KERNEL_STACKS_MAX, VMM_KERNEL_STACKS_MIN, VMM_USER_SPACE_MAX, and VMM_USER_SPACE_MIN.
Referenced by init_early().
| void * vmm_map | ( | space_t * | space, |
| void * | virtAddr, | ||
| void * | physAddr, | ||
| uint64_t | length, | ||
| pml_flags_t | flags, | ||
| space_callback_func_t | func, | ||
| void * | private | ||
| ) |
Maps physical memory to virtual memory in a given address space.
Will overwrite any existing mappings in the specified range.
When mapping a page there is no need for a TLB shootdown, since the page was previously non-present and any access to it would have caused a non-present page fault, meaning no CPU can have a stale translation cached. However, if the page is already mapped, it must first be unmapped as described in vmm_unmap().
| space | The target address space, if NULL, the kernel space is used. |
| virtAddr | The desired virtual address to map to, if NULL, the kernel chooses an available address. |
| physAddr | The physical address to map from. Must not be NULL. |
| length | The length of the memory region to map, in bytes. |
| flags | The page table flags for the mapping, must have PML_PRESENT set. |
| func | The callback function to call when the mapped memory is unmapped or the address space is freed. If NULL, then no callback will be called. |
| private | Private data to pass to the callback function. |
Returns: On success, the mapped virtual address. On failure, NULL and errno is set.
Definition at line 231 of file vmm.c.
References EBUSY, EINVAL, ENOMEM, ENOSPC, EOK, ERR, errno, NULL, page_table_is_pinned(), page_table_is_unmapped(), page_table_map(), space_mapping_t::pageAmount, space_t::pageTable, space_mapping_t::physAddr, PML_CALLBACK_NONE, PML_MAX_CALLBACK, PML_PRESENT, space_alloc_callback(), space_free_callback(), space_mapping_end(), space_mapping_start(), space_mapping_t::virtAddr, vmm_get_kernel_space(), and vmm_page_table_unmap_with_shootdown().
Referenced by aml_ensure_mem_is_mapped(), gop_mmap(), hpet_init(), ioapic_all_init(), lapic_init(), and pci_config_init().
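A usage sketch; the physical address below is purely illustrative (callers such as lapic_init() and hpet_init() map device MMIO this way):

```c
// Map one page of device memory into the kernel space at a
// kernel-chosen virtual address, with no unmap callback.
void *mmio = vmm_map(NULL, NULL, (void *)0xFEE00000, PAGE_SIZE,
    PML_PRESENT | PML_WRITE, NULL, NULL);
if (mmio == NULL)
{
    // errno is set.
}
```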
| void vmm_map_bootloader_lower_half | ( | thread_t * | bootThread | ) |
Maps the lower half of the address space to the boot thread during kernel initialization.
We still need to access bootloader data like the memory map while the kernel is initializing, so we keep the lower half mapped until the kernel is fully initialized. After that we can unmap the lower half both from kernel space and the boot thread's address space.
The bootloader's lower-half mappings will have been transferred to the kernel space mappings during boot, so we just copy them from there.
| bootThread | The boot thread, which will have its address space modified. |
Definition at line 121 of file vmm.c.
References bootThread, pml_t::entries, kernelSpace, space_t::pageTable, page_table_t::pml4, PML_INDEX_LOWER_HALF_MAX, PML_INDEX_LOWER_HALF_MIN, thread_t::process, and process_t::space.
Referenced by init_finalize().
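A minimal sketch of the copy described above, using the symbols listed in the references; the exact member layout (pointer vs. embedded struct) is an assumption:

```c
// Copy the kernel space's lower-half PML4 entries into the boot
// thread's address space (member access is illustrative).
for (uint64_t i = PML_INDEX_LOWER_HALF_MIN; i <= PML_INDEX_LOWER_HALF_MAX; i++)
{
    bootThread->process->space.pageTable.pml4->entries[i] =
        kernelSpace.pageTable.pml4->entries[i];
}
```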
| void * vmm_map_pages | ( | space_t * | space, |
| void * | virtAddr, | ||
| void ** | pages, | ||
| uint64_t | pageAmount, | ||
| pml_flags_t | flags, | ||
| space_callback_func_t | func, | ||
| void * | private | ||
| ) |
Maps an array of physical pages to virtual memory in a given address space.
Will overwrite any existing mappings in the specified range.
See vmm_map() for details on TLB shootdowns.
| space | The target address space, if NULL, the kernel space is used. |
| virtAddr | The desired virtual address to map to, if NULL, the kernel chooses an available address. |
| pages | An array of physical page addresses to map. |
| pageAmount | The number of physical pages in the pages array, must not be zero. |
| flags | The page table flags for the mapping, must have PML_PRESENT set. |
| func | The callback function to call when the mapped memory is unmapped or the address space is freed. If NULL, then no callback will be called. |
| private | Private data to pass to the callback function. |
Returns: On success, the mapped virtual address. On failure, NULL and errno is set.
Definition at line 285 of file vmm.c.
References EBUSY, EINVAL, ENOMEM, ENOSPC, EOK, ERR, errno, space_mapping_t::flags, NULL, PAGE_SIZE, page_table_is_pinned(), page_table_is_unmapped(), page_table_map_pages(), space_mapping_t::pageAmount, pageAmount, space_t::pageTable, PML_CALLBACK_NONE, PML_MAX_CALLBACK, PML_PRESENT, space_alloc_callback(), space_free_callback(), space_mapping_end(), space_mapping_start(), space_mapping_t::virtAddr, vmm_get_kernel_space(), and vmm_page_table_unmap_with_shootdown().
Referenced by shmem_mmap(), and shmem_object_allocate_pages().
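A usage sketch with illustrative physical page addresses:

```c
// Map two discontiguous physical pages as one contiguous virtual range.
void *pages[2] = {(void *)0x1000, (void *)0x2000}; // illustrative addresses
void *virt = vmm_map_pages(NULL, NULL, pages, 2,
    PML_PRESENT | PML_WRITE, NULL, NULL);
if (virt == NULL)
{
    // errno is set.
}
```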
| pml_flags_t vmm_prot_to_flags | ( | prot_t | prot | ) |
Converts the user space memory protection flags to page table entry flags.
| prot | The memory protection flags. |
Definition at line 143 of file vmm.c.
References PML_PRESENT, PML_WRITE, PROT_NONE, PROT_READ, and PROT_WRITE.
Referenced by SYSCALL_DEFINE(), and SYSCALL_DEFINE().
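A short usage sketch; the resulting flags are inferred from the references above rather than confirmed by the source:

```c
// Convert user-facing protection flags into page table flags before
// mapping user memory (referenced by syscall handlers via SYSCALL_DEFINE()).
pml_flags_t flags = vmm_prot_to_flags(PROT_READ | PROT_WRITE);
// Per the references above, this plausibly yields PML_PRESENT | PML_WRITE.
```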
| uint64_t vmm_protect | ( | space_t * | space, |
| void * | virtAddr, | ||
| uint64_t | length, | ||
| pml_flags_t | flags | ||
| ) |
Changes memory protection flags for a virtual memory region in a given address space.
The memory region must be fully mapped, otherwise this function will fail.
When changing memory protection flags, TLB shootdowns are needed on all CPUs that have the address space loaded. To do this we first update the page table entries for the region, then perform the shootdown, wait for acknowledgements from all CPUs, and finally return.
| space | The target address space, if NULL, the kernel space is used. |
| virtAddr | The virtual address of the memory region. |
| length | The length of the memory region, in bytes. |
| flags | The new page table flags for the memory region, if PML_PRESENT is not set, the memory will be unmapped. |
Returns: On success, 0. On failure, ERR and errno is set.
Definition at line 401 of file vmm.c.
References EBUSY, EINVAL, ENOENT, EOK, ERR, errno, space_mapping_t::flags, NULL, page_table_is_pinned(), page_table_is_unmapped(), page_table_set_flags(), space_mapping_t::pageAmount, space_t::pageTable, PML_PRESENT, space_mapping_end(), space_mapping_start(), space_tlb_shootdown(), space_mapping_t::virtAddr, vmm_get_kernel_space(), and vmm_unmap().
Referenced by loader_load_program(), and SYSCALL_DEFINE().
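A usage sketch, assuming virt points to a region previously mapped writable (for example by vmm_alloc()):

```c
// Downgrade the region to read-only; the call returns only after every
// CPU with this address space loaded has acknowledged the shootdown.
if (vmm_protect(NULL, virt, PAGE_SIZE, PML_PRESENT) == ERR)
{
    // errno is set (e.g. ENOENT if the region is not fully mapped).
}
```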
| void vmm_shootdown_handler | ( | interrupt_frame_t * | frame, |
| cpu_t * | self | ||
| ) |
TLB shootdown interrupt handler.
| frame | The interrupt frame. |
| self | The current CPU. |
Definition at line 458 of file vmm.c.
References assert, atomic_fetch_add, vmm_cpu_ctx_t::lock, lock_acquire(), lock_release(), NULL, vmm_shootdown_t::pageAmount, space_t::shootdownAcks, vmm_cpu_ctx_t::shootdownCount, vmm_cpu_ctx_t::shootdowns, vmm_shootdown_t::space, tlb_invalidate(), vmm_shootdown_t::virtAddr, and cpu_t::vmm.
Referenced by interrupt_handler().
| uint64_t vmm_unmap | ( | space_t * | space, |
| void * | virtAddr, | ||
| uint64_t | length | ||
| ) |
Unmaps virtual memory from a given address space.
If the memory is already unmapped, this function will do nothing.
When unmapping memory, TLB shootdowns are needed on all CPUs that have the address space loaded. To do this we first set all page entries for the region to non-present, then perform the shootdown, wait for acknowledgements from all CPUs, and finally free any underlying physical memory if the PML_OWNED flag is set.
| space | The target address space, if NULL, the kernel space is used. |
| virtAddr | The virtual address of the memory region. |
| length | The length of the memory region, in bytes. |
Returns: On success, 0. On failure, ERR and errno is set.
Definition at line 339 of file vmm.c.
References assert, BITMAP_FOR_EACH_SET, space_t::callbackBitmap, space_t::callbacks, EBUSY, EINVAL, EOK, ERR, errno, space_callback_t::func, NULL, page_table_collect_callbacks(), page_table_is_pinned(), space_callback_t::pageAmount, space_mapping_t::pageAmount, space_t::pageTable, PML_MAX_CALLBACK, PML_NONE, space_callback_t::private, space_free_callback(), space_mapping_end(), space_mapping_start(), space_mapping_t::virtAddr, vmm_get_kernel_space(), and vmm_page_table_unmap_with_shootdown().
Referenced by stack_pointer_deinit(), SYSCALL_DEFINE(), and vmm_protect().
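A usage sketch mirroring the one for vmm_protect() above (virt is again assumed to come from a prior allocation):

```c
// Unmap one page; pages allocated with vmm_alloc() carry PML_OWNED, so
// their physical memory is freed once the TLB shootdown completes.
if (vmm_unmap(NULL, virt, PAGE_SIZE) == ERR)
{
    // errno is set.
}
```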
| void vmm_unmap_bootloader_lower_half | ( | thread_t * | bootThread | ) |
Unmaps the lower half of the address space after kernel initialization.
Will unmap the lower half from both the kernel space and the boot thread's address space.
After this is called, the bootloader's lower-half mappings will be destroyed and the kernel will only have its own mappings.
| bootThread | The boot thread, which will have its address space modified. |
Definition at line 129 of file vmm.c.
References bootThread, pml_t::entries, kernelSpace, space_t::pageTable, page_table_t::pml4, PML_INDEX_LOWER_HALF_MAX, PML_INDEX_LOWER_HALF_MIN, thread_t::process, pml_entry_t::raw, and process_t::space.
Referenced by init_finalize().