PatchworkOS  c9fea19
A non-POSIX operating system.

Virtual Memory Manager (VMM). More...

Detailed Description

Virtual Memory Manager (VMM).

The Virtual Memory Manager (VMM) is responsible for allocating and mapping virtual memory.

TLB Shootdowns

When we change a mapping in an address space, it is possible that other CPUs have the same address space loaded and still hold the old mappings in their TLB (Translation Lookaside Buffer), a hardware cache of page table entries. This cache must be invalidated on every such CPU whenever we change the mappings of a page table. This is called a TLB shootdown.

Details can be found in vmm_map(), vmm_unmap() and vmm_protect().

Address Space Layout

The address space layout is split into several regions. For convenience, the regions are defined using page table indices; that is, the entire virtual address space is divided into 512 regions, each mapped by one entry in the top-level page table (PML4), with 256 entries for the lower half and 256 entries for the higher half. This lets us copy mappings between address spaces very easily by just copying the relevant PML4 entries.

First, at the very top, we have the kernel binary itself and all its sections: data, code, bss, rodata, etc. This region uses the last index in the page table. The region will never be fully filled, and the kernel itself is not guaranteed to be loaded at the very start of it; the exact address is decided by the linker.lds script. This section is mapped identically for all processes.

Secondly, we have the per-thread kernel stacks, one stack per thread. Each stack is allocated on demand and can grow dynamically up to CONFIG_MAX_KERNEL_STACK_PAGES pages, not including its guard page. This section takes up 2 indices in the page table and is process-specific, as each process has its own threads and thus its own kernel stacks.

Thirdly, we have the kernel heap, which is used for dynamic memory allocation in the kernel. The kernel heap starts at VMM_KERNEL_HEAP_MIN and grows up towards VMM_KERNEL_HEAP_MAX. This section takes up 2 indices in the page table and is mapped identically for all processes.

Fourthly, we have the identity-mapped physical memory. All physical memory is mapped here by simply taking the original physical address and adding 0xFFFF800000000000 to it, so the physical address 0x123456 is mapped to the virtual address 0xFFFF800000123456. This section takes up all remaining indices from the start of the higher half up to the kernel heap and is mapped identically for all processes.

Fifthly, we have non-canonical memory, which is impossible to access; any access triggers a general protection fault. This section takes up the gap between the lower half and the higher half of the address space.

Finally, we have user space, which starts at 0x400000 (4MiB) and goes up to the top of the lower half. The first 4MiB is left unmapped to catch null pointer dereferences. This section is different for each process.

Data Structures

struct  vmm_shootdown_t
 TLB shootdown structure. More...
 
struct  vmm_cpu_ctx_t
 Per-CPU VMM context. More...
 

Macros

#define VMM_KERNEL_BINARY_MAX   PML_HIGHER_HALF_END
 The maximum address for the content of the kernel binary.
 
#define VMM_KERNEL_BINARY_MIN    PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 1, PML4)
 The minimum address for the content of the kernel binary.
 
#define VMM_KERNEL_STACKS_MAX   VMM_KERNEL_BINARY_MIN
 The maximum address for kernel stacks.
 
#define VMM_KERNEL_STACKS_MIN   PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 3, PML4)
 The minimum address for kernel stacks.
 
#define VMM_KERNEL_HEAP_MAX   VMM_KERNEL_STACKS_MIN
 The maximum address for the kernel heap.
 
#define VMM_KERNEL_HEAP_MIN   PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 5, PML4)
 The minimum address for the kernel heap.
 
#define VMM_IDENTITY_MAPPED_MAX   VMM_KERNEL_HEAP_MIN
 The maximum address for the identity mapped physical memory.
 
#define VMM_IDENTITY_MAPPED_MIN   PML_HIGHER_HALF_START
 The minimum address for the identity mapped physical memory.
 
#define VMM_USER_SPACE_MAX   PML_LOWER_HALF_END
 The maximum address for user space.
 
#define VMM_USER_SPACE_MIN   (0x400000)
 The minimum address for user space.
 
#define VMM_IS_PAGE_ALIGNED(addr)   (((uintptr_t)(addr) & (PAGE_SIZE - 1)) == 0)
 Check if an address is page aligned.
 
#define VMM_MAX_SHOOTDOWN_REQUESTS   16
 Maximum number of shootdown requests that can be queued per CPU.
 

Enumerations

enum  vmm_alloc_flags_t { VMM_ALLOC_OVERWRITE = 0 << 0 , VMM_ALLOC_FAIL_IF_MAPPED = 1 << 0 }
 Flags for vmm_alloc(). More...
 

Functions

void vmm_init (void)
 Initializes the Virtual Memory Manager.
 
void vmm_kernel_space_load (void)
 Loads the kernel's address space into the current CPU.
 
void vmm_cpu_ctx_init (vmm_cpu_ctx_t *ctx)
 Initializes a per-CPU VMM context and performs per-CPU VMM initialization.
 
space_t * vmm_kernel_space_get (void)
 Retrieves the kernel's address space.
 
pml_flags_t vmm_prot_to_flags (prot_t prot)
 Converts the user space memory protection flags to page table entry flags.
 
void * vmm_alloc (space_t *space, void *virtAddr, uint64_t length, pml_flags_t pmlFlags, vmm_alloc_flags_t allocFlags)
 Allocates and maps virtual memory in a given address space.
 
void * vmm_map (space_t *space, void *virtAddr, void *physAddr, uint64_t length, pml_flags_t flags, space_callback_func_t func, void *private)
 Maps physical memory to virtual memory in a given address space.
 
void * vmm_map_pages (space_t *space, void *virtAddr, void **pages, uint64_t pageAmount, pml_flags_t flags, space_callback_func_t func, void *private)
 Maps an array of physical pages to virtual memory in a given address space.
 
void * vmm_unmap (space_t *space, void *virtAddr, uint64_t length)
 Unmaps virtual memory from a given address space.
 
void * vmm_protect (space_t *space, void *virtAddr, uint64_t length, pml_flags_t flags)
 Changes memory protection flags for a virtual memory region in a given address space.
 

Macro Definition Documentation

◆ VMM_KERNEL_BINARY_MAX

#define VMM_KERNEL_BINARY_MAX   PML_HIGHER_HALF_END

The maximum address for the content of the kernel binary.

Definition at line 61 of file vmm.h.

◆ VMM_KERNEL_BINARY_MIN

#define VMM_KERNEL_BINARY_MIN    PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 1, PML4)

The minimum address for the content of the kernel binary.

Definition at line 63 of file vmm.h.

◆ VMM_KERNEL_STACKS_MAX

#define VMM_KERNEL_STACKS_MAX   VMM_KERNEL_BINARY_MIN

The maximum address for kernel stacks.

Definition at line 65 of file vmm.h.

◆ VMM_KERNEL_STACKS_MIN

#define VMM_KERNEL_STACKS_MIN   PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 3, PML4)

The minimum address for kernel stacks.

Definition at line 66 of file vmm.h.

◆ VMM_KERNEL_HEAP_MAX

#define VMM_KERNEL_HEAP_MAX   VMM_KERNEL_STACKS_MIN

The maximum address for the kernel heap.

Definition at line 68 of file vmm.h.

◆ VMM_KERNEL_HEAP_MIN

#define VMM_KERNEL_HEAP_MIN   PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 5, PML4)

The minimum address for the kernel heap.

Definition at line 69 of file vmm.h.

◆ VMM_IDENTITY_MAPPED_MAX

#define VMM_IDENTITY_MAPPED_MAX   VMM_KERNEL_HEAP_MIN

The maximum address for the identity mapped physical memory.

Definition at line 71 of file vmm.h.

◆ VMM_IDENTITY_MAPPED_MIN

#define VMM_IDENTITY_MAPPED_MIN   PML_HIGHER_HALF_START

The minimum address for the identity mapped physical memory.

Definition at line 72 of file vmm.h.

◆ VMM_USER_SPACE_MAX

#define VMM_USER_SPACE_MAX   PML_LOWER_HALF_END

The maximum address for user space.

Definition at line 74 of file vmm.h.

◆ VMM_USER_SPACE_MIN

#define VMM_USER_SPACE_MIN   (0x400000)

The minimum address for user space.

Definition at line 75 of file vmm.h.

◆ VMM_IS_PAGE_ALIGNED

#define VMM_IS_PAGE_ALIGNED(addr)   (((uintptr_t)(addr) & (PAGE_SIZE - 1)) == 0)

Check if an address is page aligned.

Parameters
addr The address to check.
Returns
true if the address is page aligned, false otherwise.

Definition at line 83 of file vmm.h.

◆ VMM_MAX_SHOOTDOWN_REQUESTS

#define VMM_MAX_SHOOTDOWN_REQUESTS   16

Maximum number of shootdown requests that can be queued per CPU.

Definition at line 102 of file vmm.h.

Enumeration Type Documentation

◆ vmm_alloc_flags_t

Flags for vmm_alloc().

Enumerator
VMM_ALLOC_OVERWRITE 

If any page is already mapped, overwrite the mapping.

VMM_ALLOC_FAIL_IF_MAPPED 

If set and any page is already mapped, fail and set errno to EEXIST.

Definition at line 121 of file vmm.h.

Function Documentation

◆ vmm_init()

void vmm_init ( void  )

Initializes the Virtual Memory Manager.

Definition at line 41 of file vmm.c.


◆ vmm_kernel_space_load()

void vmm_kernel_space_load ( void  )

Loads the kernel's address space into the current CPU.

Definition at line 110 of file vmm.c.


◆ vmm_cpu_ctx_init()

void vmm_cpu_ctx_init ( vmm_cpu_ctx_t *  ctx)

Initializes a per-CPU VMM context and performs per-CPU VMM initialization.

Must be called on the CPU that owns the context.

Parameters
ctx The CPU VMM context to initialize.

Definition at line 122 of file vmm.c.


◆ vmm_kernel_space_get()

space_t * vmm_kernel_space_get ( void  )

Retrieves the kernel's address space.

Returns
Pointer to the kernel's address space.

Definition at line 133 of file vmm.c.


◆ vmm_prot_to_flags()

pml_flags_t vmm_prot_to_flags ( prot_t  prot)

Converts the user space memory protection flags to page table entry flags.

Parameters
prot The memory protection flags.
Returns
The corresponding page table entry flags.

Definition at line 138 of file vmm.c.


◆ vmm_alloc()

void * vmm_alloc ( space_t *  space,
void *  virtAddr,
uint64_t  length,
pml_flags_t  pmlFlags,
vmm_alloc_flags_t  allocFlags 
)

Allocates and maps virtual memory in a given address space.

The allocated memory will be backed by newly allocated physical memory pages and is not guaranteed to be zeroed.

See also
vmm_map() for details on TLB shootdowns.
Parameters
space The target address space; if NULL, the kernel space is used.
virtAddr The desired virtual address. If NULL, the kernel chooses an available address.
length The length of the virtual memory region to allocate, in bytes.
pmlFlags The page table flags for the mapping; PML_OWNED will always be included, and PML_PRESENT must be set.
allocFlags The allocation flags.
Returns
On success, the virtual address. On failure, returns NULL and errno is set to:
  • EINVAL: Invalid parameters.
  • EBUSY: The region contains pinned pages.
  • EEXIST: The region is already mapped and VMM_ALLOC_FAIL_IF_MAPPED is set.
  • ENOMEM: Not enough memory.
  • Other values from space_mapping_start().

Definition at line 163 of file vmm.c.


◆ vmm_map()

void * vmm_map ( space_t *  space,
void *  virtAddr,
void *  physAddr,
uint64_t  length,
pml_flags_t  flags,
space_callback_func_t  func,
void *  private 
)

Maps physical memory to virtual memory in a given address space.

Will overwrite any existing mappings in the specified range.

When mapping a previously unmapped page there is no need for a TLB shootdown, as any prior access to that page will have caused a non-present page fault. However, if the page is already mapped, it must first be unmapped as described in vmm_unmap().

Parameters
space The target address space; if NULL, the kernel space is used.
virtAddr The desired virtual address to map to; if NULL, the kernel chooses an available address.
physAddr The physical address to map from. Must not be NULL.
length The length of the memory region to map, in bytes.
flags The page table flags for the mapping, must have PML_PRESENT set.
func The callback function to call when the mapped memory is unmapped or the address space is freed. If NULL, no callback will be called.
private Private data to pass to the callback function.
Returns
On success, the virtual address. On failure, returns NULL and errno is set to:
  • EINVAL: Invalid parameters.
  • EBUSY: The region contains pinned pages.
  • ENOSPC: No available callback slots.
  • ENOMEM: Not enough memory.
  • Other values from space_mapping_start().

Definition at line 226 of file vmm.c.


◆ vmm_map_pages()

void * vmm_map_pages ( space_t *  space,
void *  virtAddr,
void **  pages,
uint64_t  pageAmount,
pml_flags_t  flags,
space_callback_func_t  func,
void *  private 
)

Maps an array of physical pages to virtual memory in a given address space.

Will overwrite any existing mappings in the specified range.

See also
vmm_map() for details on TLB shootdowns.
Parameters
space The target address space; if NULL, the kernel space is used.
virtAddr The desired virtual address to map to; if NULL, the kernel chooses an available address.
pages An array of physical page addresses to map.
pageAmount The number of physical pages in the pages array, must not be zero.
flags The page table flags for the mapping, must have PML_PRESENT set.
func The callback function to call when the mapped memory is unmapped or the address space is freed. If NULL, no callback will be called.
private Private data to pass to the callback function.
Returns
On success, the virtual address. On failure, returns NULL and errno is set to:
  • EINVAL: Invalid parameters.
  • EBUSY: The region contains pinned pages.
  • ENOSPC: No available callback slots.
  • ENOMEM: Not enough memory.
  • Other values from space_mapping_start().

Definition at line 280 of file vmm.c.


◆ vmm_unmap()

void * vmm_unmap ( space_t *  space,
void *  virtAddr,
uint64_t  length 
)

Unmaps virtual memory from a given address space.

If the memory is already unmapped, this function will do nothing.

When unmapping memory, TLB shootdowns are required on all CPUs that have the address space loaded. To perform the shootdown, we first mark all page entries for the region as non-present, send the shootdown, wait for acknowledgements from all CPUs, and finally free any underlying physical memory if the PML_OWNED flag is set.

Parameters
space The target address space; if NULL, the kernel space is used.
virtAddr The virtual address of the memory region.
length The length of the memory region, in bytes.
Returns
On success, virtAddr. On failure, NULL and errno is set to:
  • EINVAL: Invalid parameters.
  • EBUSY: The region contains pinned pages.
  • Other values from space_mapping_start().

Definition at line 334 of file vmm.c.


◆ vmm_protect()

void * vmm_protect ( space_t *  space,
void *  virtAddr,
uint64_t  length,
pml_flags_t  flags 
)

Changes memory protection flags for a virtual memory region in a given address space.

The memory region must be fully mapped, otherwise this function will fail.

When changing memory protection flags, TLB shootdowns are required on all CPUs that have the address space loaded. To perform the shootdown, we first update the page entries for the region, send the shootdown, wait for acknowledgements from all CPUs, and then return.

Parameters
space The target address space; if NULL, the kernel space is used.
virtAddr The virtual address of the memory region.
length The length of the memory region, in bytes.
flags The new page table flags for the memory region; if PML_PRESENT is not set, the memory will be unmapped.
Returns
On success, virtAddr. On failure, NULL and errno is set to:
  • EINVAL: Invalid parameters.
  • EBUSY: The region contains pinned pages.
  • ENOENT: The region is unmapped, or only partially mapped.
  • Other values from space_mapping_start().

Definition at line 396 of file vmm.c.
