
jsix

The jsix operating system

jsix is a custom multi-core x64 operating system that I am building from scratch. It's far from finished, or even usable (see the Status and Roadmap section below), but all currently planned major kernel features are now implemented to at least a passable level.

The design goals of the project are:

A note on the name: This kernel was originally named Popcorn, but I have since discovered that the Popcorn Linux project is also developing a kernel with that name, started around the same time as this project. So I've renamed this kernel jsix (always styled jsix or j6, never capitalized) as an homage to L4, xv6, and my wonderful wife.

Status and Roadmap

The following major feature areas are targets for jsix development:

UEFI boot loader

Done. The bootloader loads the kernel and the initial userspace programs, and sets up the kernel arguments describing the memory map and the EFI GOP framebuffer. Possible future ideas:

Memory

Virtual memory: Sufficient. The kernel manages virtual memory with several kinds of vm_area objects, each representing a mapped area. A vm_area can belong to one or more vm_space objects, each of which represents a whole virtual memory space. (Each process has a vm_space, and so does the kernel itself.)
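As a rough sketch of that relationship (vm_area and vm_space are the kernel's names; the fields and the map call below are simplified assumptions for illustration, not the real interface):

// Sketch only: one vm_area can be mapped into several vm_spaces,
// possibly at different base addresses in each.
#include <cstddef>
#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

struct vm_area {
    std::size_t length;   // size of the mapped region, in bytes
    // backing details (anonymous, shared, physically-backed, etc.) omitted
};

struct vm_space {
    // Where each area is mapped within this address space.
    std::vector<std::pair<std::uintptr_t, std::shared_ptr<vm_area>>> mappings;

    void map(std::uintptr_t base, std::shared_ptr<vm_area> area) {
        mappings.emplace_back(base, std::move(area));
    }
};

int main() {
    auto shared_buf = std::make_shared<vm_area>(vm_area{0x10000});
    vm_space kernel_space;   // the kernel's own address space
    vm_space proc_space;     // one per process
    proc_space.map(0x0000'7f00'0000'0000, shared_buf);
    kernel_space.map(0xffff'c000'0000'0000, shared_buf);  // same area, second space
}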

Remaining to do:

Physical page allocation: Sufficient. The current physical page allocator uses a group of blocks, each representing up to 1 GiB of usable memory as defined by the bootloader. Each block has a three-level bitmap denoting free/used pages.
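A quick sketch of why three levels fit a 1 GiB block (the field names and bit semantics below are assumptions for illustration, not the allocator's actual layout): with 64-bit words, one top-level word summarizes 64 mid-level words, each of which summarizes 64 leaf words, giving 64 × 64 × 64 = 262,144 leaf bits, exactly one per 4 KiB page in 1 GiB. Finding a free page is then three find-first-set operations:

// Sketch only: assumes a set bit means "free" at the leaf level and
// "some free page below" at the summary levels. Uses GCC/Clang builtins.
#include <cstdint>

struct page_block_bitmap {
    uint64_t l0;        // 1 word: each bit summarizes one l1 word
    uint64_t l1[64];    // 64 words: each bit summarizes one l2 word
    uint64_t l2[4096];  // 4096 words: one bit per 4 KiB page (262,144 pages = 1 GiB)

    // Returns the index of a free page within this block, or -1 if full.
    long find_free() const {
        if (!l0) return -1;
        unsigned i = __builtin_ctzll(l0);             // first l1 word with free pages
        unsigned j = __builtin_ctzll(l1[i]);          // first l2 word with free pages
        unsigned k = __builtin_ctzll(l2[i * 64 + j]); // first free page bit
        return (static_cast<long>(i) * 64 + j) * 64 + k;
    }
};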

Future work:

Multitasking

Sufficient. The global scheduler object keeps separate ready/blocked lists per core. Cores periodically attempt to balance load via work stealing.

User-space tasks are able to create threads as well as other processes.
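A minimal sketch of that arrangement, assuming one ready queue per core and a steal path that pulls from the most-loaded core (the types, locking, and victim choice are invented for illustration; the kernel's scheduler differs in detail):

// Sketch only: per-core ready queues with simple work stealing.
#include <cstddef>
#include <deque>
#include <mutex>
#include <vector>

struct thread; // opaque here

struct core_queue {
    std::mutex lock;
    std::deque<thread*> ready;   // blocked threads live on separate lists
};

// Pop local work first; if the local queue is empty, steal from the core
// that currently appears to have the most ready threads.
thread* next_thread(std::vector<core_queue>& cores, std::size_t self) {
    {
        std::scoped_lock guard{cores[self].lock};
        if (!cores[self].ready.empty()) {
            thread* t = cores[self].ready.front();
            cores[self].ready.pop_front();
            return t;
        }
    }
    std::size_t victim = self, most = 0;
    for (std::size_t i = 0; i < cores.size(); ++i) {
        // unlocked size read: only a heuristic for picking a victim
        if (i != self && cores[i].ready.size() > most) {
            most = cores[i].ready.size();
            victim = i;
        }
    }
    if (victim == self) return nullptr;              // nothing to steal; go idle
    std::scoped_lock guard{cores[victim].lock};
    if (cores[victim].ready.empty()) return nullptr; // raced; try again later
    thread* t = cores[victim].ready.back();          // steal from the far end
    cores[victim].ready.pop_back();
    return t;
}

Stealing from the opposite end of the victim's queue is a common way to reduce contention with the owning core.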

API

Syscalls: Sufficient. User-space tasks are able to make syscalls to the kernel via the fast SYSCALL/SYSRET instructions. Syscalls made via libj6 look to both the caller and the callee like standard SysV ABI function calls. On the kernel side, each syscall implementation is wrapped in a generated function that validates the request, checks capabilities, and finds the appropriate kernel objects or handles before calling the implementation itself.
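As a rough sketch of what one such generated wrapper could look like (the syscall name, capability bit, status codes, and helper functions are all hypothetical; the real generated code and ABI differ):

// Sketch only: a hypothetical generated wrapper for a "thread_kill"-style
// syscall. The user passes a handle; the wrapper resolves and checks it
// before the hand-written implementation ever runs.
#include <cstdint>

enum class status : uint64_t { ok = 0, bad_handle = 1, denied = 2 };

struct thread;                          // kernel object (opaque here)
constexpr uint64_t cap_kill = 1 << 0;   // invented capability bit

// Assumed to be provided elsewhere by the kernel for this sketch:
thread* resolve_thread_handle(uint64_t handle, uint64_t required_caps);
status  thread_kill_impl(thread* target);   // the hand-written implementation

// The generated wrapper: validate, check capabilities, resolve, then call.
extern "C" uint64_t _syscall_thread_kill(uint64_t handle)
{
    thread* target = resolve_thread_handle(handle, cap_kill);
    if (!target)
        return static_cast<uint64_t>(status::bad_handle);
    return static_cast<uint64_t>(thread_kill_impl(target));
}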

IPC: Working, needs optimization. The current IPC primitives are:

Hardware Support

Building

jsix uses the Ninja build tool, and generates the build files for it with the configure script. The build also relies on a custom toolchain sysroot, which can be downloaded or built using the scripts in jsix-os/toolchain.

Other build dependencies:

The configure script has some Python dependencies; these can be installed via pip, though doing so in a Python virtual environment is recommended. Installing via pip will also install ninja.

A Debian 11 (Bullseye) system can be configured with the necessary build dependencies by running the following commands from the jsix repository root:

sudo apt install clang lld nasm mtools python3-pip python3-venv
python3 -m venv ./venv              # create and enter a Python virtual environment
source venv/bin/activate
pip install -r requirements.txt     # Python dependencies for configure, including ninja
peru sync                           # fetch external dependencies managed by peru

Setting up the sysroot

Build or download the toolchain sysroot using the jsix-os/toolchain scripts mentioned above, and symlink the built toolchain directory as sysroot at the root of this project.

# Example if both the toolchain and this project are cloned under ~/src
ln -s ~/src/toolchain/toolchains/llvm-13 ~/src/jsix/sysroot

Building and running jsix

Once the toolchain has been set up, running the ./configure script (see ./configure --help for available options) will set up the build configuration, and ninja -C build (or wherever you put the build directory) will actually run the build. If you have qemu-system-x86_64 installed, the qemu.sh script will run jsix under QEMU in -nographic mode.

I personally run this either from a real Debian amd64 Bullseye machine or a Windows WSL Debian Bullseye installation. Your mileage may vary with other setups and distros.

Running the test suite

jsix now has a test_runner userspace program that runs various automated tests. It is not included in the default build, but if you configure with the test.yml manifest it will be built, and it can be run with the test.sh script or the qemu.sh script.

./configure --manifest=assets/manifests/test.yml
if ./test.sh; then echo "All tests passed!"; else echo "Failed."; fi