PatchworkOS  19e446b
A non-POSIX operating system.

Virtual Memory Manager (VMM).


Detailed Description

Virtual Memory Manager (VMM).

The Virtual Memory Manager (VMM) is responsible for allocating and mapping virtual memory.

TLB Shootdowns

When we change a mapping in an address space, it's possible that other CPUs have the same address space loaded and still hold the old mappings in their TLB (Translation Lookaside Buffer), a hardware cache of page table entries. This cache must be invalidated on those CPUs whenever we change the mappings of a page table. This is called a TLB shootdown.

Details can be found in vmm_map(), vmm_unmap() and vmm_protect().

Address Space Layout

The address space layout is split into several regions. For convenience, the regions are defined using page table indices; that is, the entire virtual address space is divided into 512 regions, each mapped by one entry in the top-level page table (PML4), with 256 entries for the lower half and 256 entries for the higher half. This lets us copy mappings between address spaces very easily, by just copying the relevant PML4 entries.
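For intuition, a PML4-index-to-address conversion can be sketched as follows. Each PML4 entry covers 2^39 bytes (512 GiB), and x86_64 canonical addresses require sign-extending bit 47 into the upper bits; the helper name and exact formula here are illustrative, not the actual PML_INDEX_TO_ADDR definition from vmm.h:

```c
#include <stdint.h>

// Illustrative sketch of converting a PML4 index (0..511) into the base
// virtual address of the 512 GiB region that entry maps.
static uint64_t pml4_index_to_addr(uint64_t index)
{
    uint64_t addr = index << 39;  // each PML4 entry maps 2^39 bytes (512 GiB)
    if (addr & (1ULL << 47)) {    // canonical form: sign-extend bit 47
        addr |= 0xFFFF000000000000ULL;
    }
    return addr;
}

// Index 256 is the first entry of the higher half: 0xFFFF800000000000.
// Index 511, the last entry, is where the kernel binary region begins:
// 0xFFFFFF8000000000.
```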

First, at the very top, we have the kernel binary itself and all its data: code, bss, rodata, etc. This region uses the last index in the page table. It will never be completely filled, and the kernel itself is not guaranteed to be loaded at the very start of the region; the exact address is decided by the linker.lds script. This section is mapped identically for all processes.

Secondly, we have the per-thread kernel stacks, one stack per thread. Each stack is allocated on demand and can grow dynamically up to CONFIG_MAX_KERNEL_STACK_PAGES pages, not including its guard page. This section takes up 2 indices in the page table and is process-specific, as each process has its own threads and thus its own kernel stacks.

Thirdly, we have the kernel heap, which is used for dynamic memory allocation in the kernel. The kernel heap starts at VMM_KERNEL_HEAP_MIN and grows up towards VMM_KERNEL_HEAP_MAX. This section takes up 2 indices in the page table and is mapped identically for all processes.

Fourthly, we have the identity-mapped physical memory. All physical memory is mapped here by simply taking the original physical address and adding 0xFFFF800000000000 to it; for example, the physical address 0x123456 is mapped to the virtual address 0xFFFF800000123456. This section takes up all remaining indices below the kernel heap to the end of the higher half and is mapped identically for all processes.
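The direct-map translation described above is just an offset addition, so it is cheap in both directions. A minimal sketch (the helper names are illustrative; only the 0xFFFF800000000000 offset comes from this document):

```c
#include <stdint.h>

// Offset of the identity-mapped (higher-half direct map) region.
#define IDENTITY_MAP_OFFSET 0xFFFF800000000000ULL

// Illustrative helpers: physical <-> virtual translation within the
// identity-mapped region is a single addition or subtraction.
static inline uint64_t phys_to_virt(uint64_t phys)
{
    return phys + IDENTITY_MAP_OFFSET;
}

static inline uint64_t virt_to_phys(uint64_t virt)
{
    return virt - IDENTITY_MAP_OFFSET;
}
```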

Fifthly, we have non-canonical memory, which is impossible to access; any access will trigger a general protection fault. This section takes up the gap between the lower half and the higher half of the address space.

Finally, we have user space, which starts at 0x400000 (4 MiB) and goes up to the top of the lower half. The first 4 MiB are left unmapped to catch null pointer dereferences. This section is different for each process.

Data Structures

struct  vmm_shootdown_t
 TLB shootdown structure.
 
struct  vmm_cpu_t
 Per-CPU VMM context.
 

Macros

#define VMM_KERNEL_BINARY_MAX   PML_HIGHER_HALF_END
 The maximum address for the content of the kernel binary.
 
#define VMM_KERNEL_BINARY_MIN    PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 1, PML4)
 The minimum address for the content of the kernel binary.
 
#define VMM_KERNEL_STACKS_MAX   VMM_KERNEL_BINARY_MIN
 The maximum address for kernel stacks.
 
#define VMM_KERNEL_STACKS_MIN   PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 3, PML4)
 The minimum address for kernel stacks.
 
#define VMM_KERNEL_HEAP_MAX   VMM_KERNEL_STACKS_MIN
 The maximum address for the kernel heap.
 
#define VMM_KERNEL_HEAP_MIN   PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 5, PML4)
 The minimum address for the kernel heap.
 
#define VMM_IDENTITY_MAPPED_MAX   VMM_KERNEL_HEAP_MIN
 The maximum address for the identity mapped physical memory.
 
#define VMM_IDENTITY_MAPPED_MIN   PML_HIGHER_HALF_START
 The minimum address for the identity mapped physical memory.
 
#define VMM_USER_SPACE_MAX   PML_LOWER_HALF_END
 The maximum address for user space.
 
#define VMM_USER_SPACE_MIN   (0x400000)
 The minimum address for user space.
 
#define VMM_IS_PAGE_ALIGNED(addr)   (((uintptr_t)(addr) & (PAGE_SIZE - 1)) == 0)
 Check if an address is page aligned.
 
#define VMM_MAX_SHOOTDOWN_REQUESTS   16
 Maximum number of shootdown requests that can be queued per CPU.
 

Enumerations

enum  vmm_alloc_flags_t { VMM_ALLOC_OVERWRITE = 0 << 0 , VMM_ALLOC_FAIL_IF_MAPPED = 1 << 0 , VMM_ALLOC_ZERO = 1 << 1 }
 Flags for vmm_alloc().
 

Functions

void vmm_init (void)
 Initializes the Virtual Memory Manager.
 
void vmm_kernel_space_load (void)
 Loads the kernel's address space into the current CPU.
 
space_t * vmm_kernel_space_get (void)
 Retrieves the kernel's address space.
 
pml_flags_t vmm_prot_to_flags (prot_t prot)
 Converts the user space memory protection flags to page table entry flags.
 
void * vmm_alloc (space_t *space, void *virtAddr, size_t length, size_t alignment, pml_flags_t pmlFlags, vmm_alloc_flags_t allocFlags)
 Allocates and maps virtual memory in a given address space.
 
void * vmm_map (space_t *space, void *virtAddr, phys_addr_t physAddr, size_t length, pml_flags_t flags, space_callback_func_t func, void *data)
 Maps physical memory to virtual memory in a given address space.
 
void * vmm_map_pages (space_t *space, void *virtAddr, pfn_t *pfns, size_t amount, pml_flags_t flags, space_callback_func_t func, void *data)
 Maps an array of physical pages to virtual memory in a given address space.
 
void * vmm_unmap (space_t *space, void *virtAddr, size_t length)
 Unmaps virtual memory from a given address space.
 
void * vmm_protect (space_t *space, void *virtAddr, size_t length, pml_flags_t flags)
 Changes memory protection flags for a virtual memory region in a given address space.
 
void vmm_load (space_t *space)
 Loads a virtual address space.
 
void vmm_tlb_shootdown (space_t *space, void *virtAddr, size_t pageAmount)
 Performs a TLB shootdown for a region of the address space and waits for acknowledgements.
 

Macro Definition Documentation

◆ VMM_KERNEL_BINARY_MAX

#define VMM_KERNEL_BINARY_MAX   PML_HIGHER_HALF_END

The maximum address for the content of the kernel binary.

Definition at line 61 of file vmm.h.

◆ VMM_KERNEL_BINARY_MIN

#define VMM_KERNEL_BINARY_MIN    PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 1, PML4)

The minimum address for the content of the kernel binary.

Definition at line 63 of file vmm.h.

◆ VMM_KERNEL_STACKS_MAX

#define VMM_KERNEL_STACKS_MAX   VMM_KERNEL_BINARY_MIN

The maximum address for kernel stacks.

Definition at line 65 of file vmm.h.

◆ VMM_KERNEL_STACKS_MIN

#define VMM_KERNEL_STACKS_MIN   PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 3, PML4)

The minimum address for kernel stacks.

Definition at line 66 of file vmm.h.

◆ VMM_KERNEL_HEAP_MAX

#define VMM_KERNEL_HEAP_MAX   VMM_KERNEL_STACKS_MIN

The maximum address for the kernel heap.

Definition at line 68 of file vmm.h.

◆ VMM_KERNEL_HEAP_MIN

#define VMM_KERNEL_HEAP_MIN   PML_INDEX_TO_ADDR(PML_INDEX_AMOUNT - 5, PML4)

The minimum address for the kernel heap.

Definition at line 69 of file vmm.h.

◆ VMM_IDENTITY_MAPPED_MAX

#define VMM_IDENTITY_MAPPED_MAX   VMM_KERNEL_HEAP_MIN

The maximum address for the identity mapped physical memory.

Definition at line 71 of file vmm.h.

◆ VMM_IDENTITY_MAPPED_MIN

#define VMM_IDENTITY_MAPPED_MIN   PML_HIGHER_HALF_START

The minimum address for the identity mapped physical memory.

Definition at line 72 of file vmm.h.

◆ VMM_USER_SPACE_MAX

#define VMM_USER_SPACE_MAX   PML_LOWER_HALF_END

The maximum address for user space.

Definition at line 74 of file vmm.h.

◆ VMM_USER_SPACE_MIN

#define VMM_USER_SPACE_MIN   (0x400000)

The minimum address for user space.

Definition at line 75 of file vmm.h.

◆ VMM_IS_PAGE_ALIGNED

#define VMM_IS_PAGE_ALIGNED(addr)   (((uintptr_t)(addr) & (PAGE_SIZE - 1)) == 0)

Check if an address is page aligned.

Parameters
addrThe address to check.
Returns
true if the address is page aligned, false otherwise.

Definition at line 83 of file vmm.h.
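The macro works because PAGE_SIZE is a power of two: masking with PAGE_SIZE - 1 keeps only the in-page offset bits, which must all be zero for an aligned address. A standalone sketch, assuming 4 KiB pages (the actual PAGE_SIZE is defined elsewhere in the kernel):

```c
#include <stdint.h>

#define PAGE_SIZE 4096ULL  // assumed 4 KiB pages, as is standard on x86_64

// Same expression as the macro documented above: an address is page
// aligned exactly when its low log2(PAGE_SIZE) bits are all zero.
#define VMM_IS_PAGE_ALIGNED(addr) (((uintptr_t)(addr) & (PAGE_SIZE - 1)) == 0)

// 0x1000 is aligned (offset bits 0x000); 0x1001 is not (offset 0x001).
```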

◆ VMM_MAX_SHOOTDOWN_REQUESTS

#define VMM_MAX_SHOOTDOWN_REQUESTS   16

Maximum number of shootdown requests that can be queued per CPU.

Definition at line 102 of file vmm.h.

Enumeration Type Documentation

◆ vmm_alloc_flags_t

Flags for vmm_alloc().

Enumerator
VMM_ALLOC_OVERWRITE 

If any page is already mapped, overwrite the mapping.

VMM_ALLOC_FAIL_IF_MAPPED 

If set and any page is already mapped, fail and set errno to EEXIST.

VMM_ALLOC_ZERO 

If set, atomically zero the allocated pages.

Definition at line 120 of file vmm.h.
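Note that VMM_ALLOC_OVERWRITE is defined as 0, so it is the behavior you get when no flag is passed; the remaining flags combine with bitwise OR. Reproducing the enumerator values standalone:

```c
// Values copied from the vmm_alloc_flags_t enum documented above.
typedef enum {
    VMM_ALLOC_OVERWRITE = 0 << 0,      // default: overwrite existing mappings
    VMM_ALLOC_FAIL_IF_MAPPED = 1 << 0, // fail with EEXIST if already mapped
    VMM_ALLOC_ZERO = 1 << 1,           // zero the allocated pages
} vmm_alloc_flags_t;

// Flags are combined with bitwise OR, e.g.
// vmm_alloc_flags_t flags = VMM_ALLOC_FAIL_IF_MAPPED | VMM_ALLOC_ZERO;
```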

Function Documentation

◆ vmm_init()

void vmm_init ( void  )

Initializes the Virtual Memory Manager.

Definition at line 47 of file vmm.c.


◆ vmm_kernel_space_load()

void vmm_kernel_space_load ( void  )

Loads the kernel's address space into the current CPU.

Definition at line 116 of file vmm.c.


◆ vmm_kernel_space_get()

space_t * vmm_kernel_space_get ( void  )

Retrieves the kernel's address space.

Returns
Pointer to the kernel's address space.

Definition at line 123 of file vmm.c.


◆ vmm_prot_to_flags()

pml_flags_t vmm_prot_to_flags ( prot_t  prot)

Converts the user space memory protection flags to page table entry flags.

Parameters
protThe memory protection flags.
Returns
The corresponding page table entry flags.

Definition at line 128 of file vmm.c.


◆ vmm_alloc()

void * vmm_alloc ( space_t * space,
void *  virtAddr,
size_t  length,
size_t  alignment,
pml_flags_t  pmlFlags,
vmm_alloc_flags_t  allocFlags 
)

Allocates and maps virtual memory in a given address space.

The allocated memory will be backed by newly allocated physical memory pages and is not guaranteed to be zeroed unless VMM_ALLOC_ZERO is passed.

See also
vmm_map() for details on TLB shootdowns.
Parameters
spaceThe target address space, if NULL, the kernel space is used.
virtAddrThe desired virtual address. If NULL, the kernel chooses an available address.
lengthThe length of the virtual memory region to allocate, in bytes.
alignmentThe required alignment for the virtual memory region in bytes.
pmlFlagsThe page table flags for the mapping, will always include PML_OWNED, must have PML_PRESENT set.
allocFlagsThe allocation flags.
Returns
On success, the virtual address. On failure, returns NULL and errno is set to:
  • EINVAL: Invalid parameters.
  • EBUSY: The region contains pinned pages.
  • EEXIST: The region is already mapped and VMM_ALLOC_FAIL_IF_MAPPED is set.
  • ENOMEM: Not enough memory.
  • Other values from space_mapping_start().

Definition at line 153 of file vmm.c.


◆ vmm_map()

void * vmm_map ( space_t * space,
void *  virtAddr,
phys_addr_t  physAddr,
size_t  length,
pml_flags_t  flags,
space_callback_func_t  func,
void *  data 
)

Maps physical memory to virtual memory in a given address space.

Will overwrite any existing mappings in the specified range.

When mapping a page there is no need for a TLB shootdown, since any previous access to the unmapped page would have caused a non-present page fault rather than creating a TLB entry. However, if the page is already mapped, it must first be unmapped as described in vmm_unmap().

Parameters
spaceThe target address space, if NULL, the kernel space is used.
virtAddrThe desired virtual address to map to, if NULL, the kernel chooses an available address.
physAddrThe physical address to map from. Must not be PHYS_ADDR_INVALID.
lengthThe length of the memory region to map, in bytes.
flagsThe page table flags for the mapping, must have PML_PRESENT set.
funcThe callback function to call when the mapped memory is unmapped or the address space is freed. If NULL, then no callback will be called.
dataPrivate data to pass to the callback function.
Returns
On success, the virtual address. On failure, returns NULL and errno is set to:
  • EINVAL: Invalid parameters.
  • EBUSY: The region contains pinned pages.
  • ENOSPC: No available callback slots.
  • ENOMEM: Not enough memory.
  • Other values from space_mapping_start().

Definition at line 226 of file vmm.c.


◆ vmm_map_pages()

void * vmm_map_pages ( space_t * space,
void *  virtAddr,
pfn_t * pfns,
size_t  amount,
pml_flags_t  flags,
space_callback_func_t  func,
void *  data 
)

Maps an array of physical pages to virtual memory in a given address space.

Will overwrite any existing mappings in the specified range.

See also
vmm_map() for details on TLB shootdowns.
Parameters
spaceThe target address space, if NULL, the kernel space is used.
virtAddrThe desired virtual address to map to, if NULL, the kernel chooses an available address.
pfnsAn array of page frame numbers to map from.
amountThe number of pages to map.
flagsThe page table flags for the mapping, must have PML_PRESENT set.
funcThe callback function to call when the mapped memory is unmapped or the address space is freed. If NULL, then no callback will be called.
dataPrivate data to pass to the callback function.
Returns
On success, the virtual address. On failure, returns NULL and errno is set to:
  • EINVAL: Invalid parameters.
  • EBUSY: The region contains pinned pages.
  • ENOSPC: No available callback slots.
  • ENOMEM: Not enough memory.
  • Other values from space_mapping_start().

Definition at line 281 of file vmm.c.


◆ vmm_unmap()

void * vmm_unmap ( space_t * space,
void *  virtAddr,
size_t  length 
)

Unmaps virtual memory from a given address space.

If the memory is already unmapped, this function will do nothing.

When unmapping memory, TLB shootdowns are needed on all CPUs that have the address space loaded. To perform the shootdown we first set all page entries for the region to non-present, perform the shootdown and wait for acknowledgements from all CPUs, and finally free any underlying physical memory if the PML_OWNED flag is set.
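The three-step sequence above (clear present bits, shoot down, then free owned frames) can be modeled with a toy page table; all types and helpers here are illustrative stand-ins for the kernel's real structures:

```c
#include <stdbool.h>
#include <stddef.h>

// Toy model of a page table entry; the real kernel uses hardware PML entries.
typedef struct {
    bool present;
    bool owned;  // stand-in for PML_OWNED: the VMM owns the backing frame
    bool freed;  // stand-in for the frame having been returned to the allocator
} toy_pte_t;

// Stand-in for the shootdown: in the kernel this sends IPIs to every other
// CPU with the space loaded and waits for their acknowledgements.
static void toy_shootdown(toy_pte_t *ptes, size_t count)
{
    (void)ptes;
    (void)count;
}

static void toy_unmap(toy_pte_t *ptes, size_t count)
{
    // 1. Mark every entry non-present so no CPU can create new TLB entries.
    for (size_t i = 0; i < count; i++) {
        ptes[i].present = false;
    }

    // 2. Invalidate stale TLB entries on all CPUs and wait for acks.
    toy_shootdown(ptes, count);

    // 3. Only now is it safe to free owned physical frames: no CPU can
    //    still reach them through a cached translation.
    for (size_t i = 0; i < count; i++) {
        if (ptes[i].owned) {
            ptes[i].freed = true;
        }
    }
}
```

Freeing before the shootdown would let another CPU keep writing to a frame that has already been handed back to the physical memory allocator, which is why the ordering matters.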

Parameters
spaceThe target address space, if NULL, the kernel space is used.
virtAddrThe virtual address of the memory region.
lengthThe length of the memory region, in bytes.
Returns
On success, virtAddr. On failure, NULL and errno is set to:
  • EINVAL: Invalid parameters.
  • EBUSY: The region contains pinned pages.
  • Other values from space_mapping_start().

Definition at line 336 of file vmm.c.


◆ vmm_protect()

void * vmm_protect ( space_t * space,
void *  virtAddr,
size_t  length,
pml_flags_t  flags 
)

Changes memory protection flags for a virtual memory region in a given address space.

The memory region must be fully mapped, otherwise this function will fail.

When changing memory protection flags, TLB shootdowns are needed on all CPUs that have the address space loaded. To perform the shootdown we first update the page entries for the region, then perform the shootdown and wait for acknowledgements from all CPUs before returning.

Parameters
spaceThe target address space, if NULL, the kernel space is used.
virtAddrThe virtual address of the memory region.
lengthThe length of the memory region, in bytes.
flagsThe new page table flags for the memory region, if PML_PRESENT is not set, the memory will be unmapped.
Returns
On success, virtAddr. On failure, NULL and errno is set to:
  • EINVAL: Invalid parameters.
  • EBUSY: The region contains pinned pages.
  • ENOENT: The region is unmapped, or only partially mapped.
  • Other values from space_mapping_start().

Definition at line 398 of file vmm.c.


◆ vmm_load()

void vmm_load ( space_t * space)

Loads a virtual address space.

Must be called with interrupts disabled.

Will do nothing if the space is already loaded.

Parameters
spaceThe address space to load.

Definition at line 442 of file vmm.c.


◆ vmm_tlb_shootdown()

void vmm_tlb_shootdown ( space_t * space,
void *  virtAddr,
size_t  pageAmount 
)

Performs a TLB shootdown for a region of the address space and waits for acknowledgements.

Must be called between space_mapping_start() and space_mapping_end().

This will cause all CPUs that have the address space loaded to invalidate their TLB entries for the specified region.

Will not affect the current CPU's TLB, that is handled by the page_table_t directly when modifying page table entries.

Todo:
Currently this does a busy wait for acknowledgements. Use a wait queue?
Parameters
spaceThe target address space.
virtAddrThe starting virtual address of the region.
pageAmountThe number of pages in the region.

Definition at line 499 of file vmm.c.
