mirror of https://github.com/taigrr/gopher-os synced 2025-01-18 04:43:13 -08:00

10 Commits

Author SHA1 Message Date
Achilleas Anagnostopoulos
d17f582c0b Reserve space for redirect table and install trampolines
The rt0_64 code reserves space for _rt0_redirect_table using the output
from the redirect tool's "count" command as a hint to the size of the
table. The table itself is located in the .goredirectstbl section which
the linker moves to a dedicated section in the final ELF image.

When the kernel boots, the _rt0_install_redirect_trampolines function
iterates over the _rt0_redirect_table entries (populated as a post-link step)
and overwrites the original function code with a trampoline that
redirects control to the destination function.

The trampoline is implemented as a 14-byte sequence that exploits
rip-relative addressing to ensure that no registers are clobbered. The
actual trampoline code looks like this:

jmp [rip+0]                 ; 6-bytes
dq abs_address_to_jump_to   ; 8-bytes

The _rt0_install_redirect_trampolines function sets up the abs_address
to "dst" for each (src, dst) tuple and then copies the trampoline to
"src". After the trampoline is installed, any calls to "src" will be
transparently redirected to "dst". This hack (modifying code in the
.text section) is only possible because the code runs in supervisor mode
before memory protection is enabled.
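
For illustration, a minimal sketch of the install loop; the register
choices, label names and the exact layout of each (src, dst) table entry
are assumptions, not the actual rt0 code:

; hypothetical sketch: rbx points at the first (src, dst) entry and
; rcx holds the entry count reported by the redirect tool
install_next:
    mov rsi, [rbx]           ; rsi = src, the function to patch
    mov rdi, [rbx + 8]       ; rdi = dst, the redirect target
    mov word [rsi], 0x25ff   ; opcode bytes for jmp [rip+0] (ff 25)
    mov dword [rsi + 2], 0   ; rip-relative displacement of zero
    mov [rsi + 6], rdi       ; 8-byte absolute address read by the jmp
    add rbx, 16              ; advance to the next (src, dst) entry
    dec rcx
    jnz install_next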
2017-06-25 21:39:20 +01:00
Achilleas Anagnostopoulos
56d23f50ae Enable page write protection for both kernel and user space
If the WP bit in CR0 is not set, write protection for pages is only
enforced for user-land code.
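
For reference, a minimal sketch of enabling that bit (WP is bit 16 of
CR0); the surrounding rt0 code is omitted:

mov rax, cr0
or  rax, 1 << 16    ; set CR0.WP: honour read-only pages in supervisor mode too
mov cr0, rax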
2017-06-22 06:24:30 +01:00
Achilleas Anagnostopoulos
5a2efb2bd3 Load IDT and define gate error-code-aware handlers (rt0/x86_64)
The rt0_64 code will load a blank IDT with 256 entries (the max number
of supported interrupts in the x86_64 architecture). Each IDT entry is
set as *not present* but its handler is set to a dedicated gate entrypoint
defined in the rt0 code.

A gate entrypoint is defined for each interrupt number using a nasm
macro. Each entrypoint uses the interrupt number to index a list of
pointers to the registered interrupt handlers (defined and managed by
the Go assembly code in the irq pkg), pushes the handler's address on
the stack and then jumps to one of the two available gate dispatching
functions (some interrupts also push an error code on the stack which
must be popped before returning from the interrupt handler):
- _rt0_64_gate_dispatcher_with_code
- _rt0_64_gate_dispatcher_without_code

Both dispatchers operate in the same way:
- they save the original registers
- they invoke the interrupt handler
- they restore the original registers
- they ensure that the stack pointer (rsp) points to the exception frame
  pushed by the CPU

The difference between the dispatchers is that the "with_code" variant
invokes a handler with signature `func(code, &frame, &regs)` and
ensures that the code is popped off the stack before returning from the
interrupt, while the "without_code" variant invokes a handler with
signature `func(&frame, &regs)`.
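
As an illustration, a hypothetical nasm macro that could generate such an
entrypoint; the entrypoint label and the handler table symbol are made-up
names, only the dispatcher name comes from the rt0 code:

%macro GATE_ENTRY_WITHOUT_CODE 1
_rt0_64_gate_entry_%1:
    ; push the registered handler for vector %1 and hand control to the
    ; common dispatcher; no general purpose registers are touched here
    push qword [rel gate_handler_table + %1 * 8]
    jmp  _rt0_64_gate_dispatcher_without_code
%endmacro

GATE_ENTRY_WITHOUT_CODE 32    ; expanded once per interrupt number
GATE_ENTRY_WITHOUT_CODE 33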
2017-06-21 17:46:41 +01:00
Achilleas Anagnostopoulos
827f1a171f Load SS register value to DS_SEG when setting up GDT
If it is not set, the CPU will generate a GPF exception when returning
from an interrupt handler.
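
A minimal sketch of the segment reload, assuming DS_SEG is the
data-segment selector defined by the GDT setup code:

mov ax, DS_SEG    ; data-segment selector from the GDT
mov ss, ax        ; a valid SS/DS pair is required for iretq to succeed
mov ds, ax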
2017-06-21 17:29:29 +01:00
Achilleas Anagnostopoulos
1b88764676 Enable support for the no-execute (NX) bit 2017-06-18 09:46:04 +01:00
Achilleas Anagnostopoulos
886c7b10fa Merge pull request #24 from achilleasa/refactor-bootmem-allocator
Refactor bootmem allocator
2017-06-18 09:36:56 +01:00
Achilleas Anagnostopoulos
c81fd8b758 Pass kernel start/end physical address to Kmain 2017-06-18 09:15:51 +01:00
Tw
533ce2f2ea fix build failure because of _cgo_yield
Signed-off-by: Tw <tw19881113@gmail.com>
2017-06-17 20:16:31 +08:00
Achilleas Anagnostopoulos
99e4bedb74 Recursively map last P4 entry to itself
This allows us to use specially-crafted virtual memory addresses to
remove indirection levels and access the actual page table entries.
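
A minimal sketch of the recursive entry as it could be set up by the
32-bit rt0 code, assuming page_table_p4 resolves to the table's physical
address:

; point the last (511th) P4 entry back at the P4 table itself; the
; upper half of the 64-bit entry is assumed to be already zeroed
mov eax, page_table_p4
or  eax, 0x3                       ; present | writable
mov [page_table_p4 + 511 * 8], eax

A virtual address whose P4 index is 511 then makes the MMU treat the P4
table itself as the next-level table, which is what lets page table
entries be read and written directly.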
2017-05-31 17:02:56 +01:00
Achilleas Anagnostopoulos
2558f79fbf Switch to a 64-bit version of the kernel and rt0 code
The switch to 64-bit mode allows us to use 48-bit addressing and to
relocate the kernel to virtual address 0xffff800000000000 + 1M. The
actual kernel is loaded by the bootloader at physical address 1M.

The rt0 code has been split into two parts. The 32-bit part provides the
entrypoint that the bootloader jumps to after loading the kernel. Its
purpose is to make sure that:
- the kernel was booted by a multiboot-compliant bootloader
- the multiboot info structures are copied to a reserved memory block
  where they can be accessed after enabling paging
- the CPU meets the minimum requirements for the kernel (CPUID, SSE,
  support for long-mode)

Since paging is not enabled when the 32-bit code runs, it needs to
translate all memory addresses it accesses to physical memory addresses
by subtracting PAGE_OFFSET. The 32-bit rt0 code sets up a page table
that identity-maps the region 0 to 8M and also maps the region
PAGE_OFFSET to PAGE_OFFSET+8M onto the same physical frames. This
ensures that when paging gets enabled, we will still
be able to access the kernel using both physical and virtual memory
addresses. After enabling paging, the 32-bit rt0 will jump to a small
64-bit trampoline function that updates the stack pointer to use the
proper virtual address and jumps to the virtual address of the 64-bit
entry point.
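
A hypothetical sketch of that trampoline; the label names are
illustrative, only PAGE_OFFSET comes from the description above:

[bits 64]
_rt0_64_trampoline:
    mov rax, PAGE_OFFSET
    add rsp, rax              ; rebase the stack pointer to its higher-half alias
    mov rax, _rt0_64_entry    ; absolute 64-bit virtual address of the entrypoint
    jmp rax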

The 64-bit entrypoint sets up the minimal g0 structure required by the
Go function prologue for stack checks and points the FS register to it.
The principle is the same as with the 32-bit code (a segment register
holds the address of a pointer to the active g), with the difference
that in 64-bit mode the FS register is used instead of GS and that in
order to set its value we need to write to an MSR.
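
For reference, a minimal sketch of pointing FS at that pointer via the
IA32_FS_BASE MSR (0xc0000100); g0_ptr is a made-up name for the slot
holding the pointer to the active g:

mov ecx, 0xc0000100    ; IA32_FS_BASE
mov rax, g0_ptr        ; address of the pointer to the active g (g0)
mov rdx, rax
shr rdx, 32            ; wrmsr expects the 64-bit value split across edx:eax
wrmsr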
2017-05-03 21:37:53 +01:00