From 4e0ad81770c993c62553faa01d31ebc2db303526 Mon Sep 17 00:00:00 2001
From: Achilleas Anagnostopoulos
Date: Wed, 12 Jul 2017 23:31:54 +0100
Subject: [PATCH] Reload GDT with the descriptor VMA once the CPU switches to
 64-bit mode

The GDT is initially loaded in the 32-bit rt0 code, where we cannot use
the 48-bit VMA of the GDT table and instead use its physical address.
This approach works because the rt0 code establishes an identity mapping
for the region 0-8M. However, when the kernel creates a more granular
PDT it only includes the VMA addresses of the kernel ELF image sections,
making the 0-8M identity mapping invalid.

Unless the GDT is reloaded with the VMA of the table, the CPU will
trigger a non-recoverable page fault when it tries to reload the segment
registers while returning from an otherwise recoverable page fault.
---
 src/arch/x86_64/asm/rt0_32.s | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/src/arch/x86_64/asm/rt0_32.s b/src/arch/x86_64/asm/rt0_32.s
index ae4b124..e7a227f 100644
--- a/src/arch/x86_64/asm/rt0_32.s
+++ b/src/arch/x86_64/asm/rt0_32.s
@@ -355,6 +355,20 @@ write_string:
 ;------------------------------------------------------------------------------
 bits 64
 _rt0_64_entry_trampoline:
+    ; The currently loaded GDT points to the physical address of gdt0. This
+    ; works for now since we identity map the first 8M of the kernel. When
+    ; we set up a proper PDT for the VMA address of the kernel, the 0-8M
+    ; mapping will be invalid, causing a page fault when the CPU tries to
+    ; restore the segment registers while returning from the page fault
+    ; handler.
+    ;
+    ; To fix this, we need to update the GDT so it uses the 48-bit virtual
+    ; address of gdt0.
+    mov rax, gdt0_desc
+    mov rbx, gdt0
+    mov qword [rax+2], rbx
+    lgdt [rax]
+
     mov rsp, stack_top  ; now that paging is enabled we can load the stack
                         ; with the virtual address of the allocated stack.
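
Note (not part of the patch): gdt0 and gdt0_desc are defined elsewhere in
rt0_32.s and do not appear in this diff. The sketch below is a hypothetical
layout, shown only to illustrate why the trampoline stores a qword at
gdt0_desc+2 before executing lgdt: the lgdt operand is a pseudo-descriptor
made of a 2-byte limit followed by the base address, and in long mode the CPU
reads an 8-byte base starting at offset 2, so the descriptor needs 8 bytes
reserved for the base field.

    gdt0:                            ; hypothetical GDT contents
        dq 0x0000000000000000        ; null descriptor
        dq 0x00209A0000000000        ; 64-bit code segment (L=1, present, ring 0)
        dq 0x0000920000000000        ; data segment (present, writable)
    gdt0_end:

    gdt0_desc:
        dw gdt0_end - gdt0 - 1       ; limit: size of the GDT minus one
        dd gdt0                      ; base: 32-bit physical address used by
                                     ; the initial lgdt in 32-bit mode
        dd 0                         ; padding so the qword store of the
                                     ; 48-bit VMA at gdt0_desc+2 (and the
                                     ; 10-byte pseudo-descriptor read by lgdt
                                     ; in long mode) stays inside the structure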