Fix VMR creation off-by-one

If a mapping would just barely fit before the first VMR (its end landing
exactly at the first VMR's base), we would fail to use that slot.
Practically, this only happens if you decide to do a MAP_FIXED at a low
address, which will unmap a chunk of ld.so - not recommended!
diff --git a/kern/arch/x86/ros/mmu64.h b/kern/arch/x86/ros/mmu64.h
index be2058e..9e692be 100644
--- a/kern/arch/x86/ros/mmu64.h
+++ b/kern/arch/x86/ros/mmu64.h
@@ -118,6 +118,13 @@
  * |     Program Data & Heap      |
  * |                              |
  * +------------------------------+ 0x0000000000400000
+ * .                              .
+ * .                              .
+ * |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|
+ * |                              |
+ * |       ld.so (Dynamic)        |
+ * |                              |
+ * MMAP_LOWEST_VA +------------------------------+ 0x0000000000100000
  * |                              |
  * |       Empty Memory (*)       |
  * |                              |
diff --git a/kern/src/mm.c b/kern/src/mm.c
index 2b10536..bf93513 100644
--- a/kern/src/mm.c
+++ b/kern/src/mm.c
@@ -58,7 +58,7 @@
 	vm_i = TAILQ_FIRST(&p->vm_regions);
 	/* This works for now, but if all we have is BRK_END ones, we'll start
 	 * growing backwards (TODO) */
-	if (!vm_i || (va + len < vm_i->vm_base)) {
+	if (!vm_i || (va + len <= vm_i->vm_base)) {
 		vmr = kmem_cache_alloc(vmr_kcache, 0);
 		if (!vmr)
 			panic("EOM!");
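The fencepost logic above can be sketched in isolation. This is a minimal standalone illustration, not code from the tree: `fits_before` is a hypothetical helper standing in for the inline condition in `mm.c`, and the addresses in the usage are made up. A VMR covers the half-open range [va, va + len), so a candidate fits below an existing region iff its exclusive end is at or below that region's base; the old `<` wrongly rejected the exact-fit case.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical helper (not from the Akaros source): does the candidate
 * mapping [va, va + len) fit entirely below a region starting at vm_base?
 * Because the range is half-open, the end may equal vm_base without the
 * two regions overlapping, hence '<=' rather than '<'. */
static bool fits_before(uintptr_t va, size_t len, uintptr_t vm_base)
{
	return va + len <= vm_base;
}
```

With the old `<` test, mapping 0x1000 bytes at `vm_base - 0x1000` would have been rejected even though it abuts, rather than overlaps, the first VMR.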