littlekernel / lk

LK embedded kernel
MIT License

Debugger (gdb/etc) support? #251

Open benvanik opened 4 years ago

benvanik commented 4 years ago

Is there any debugging support in/on top of LK beyond UART? I noticed some debugging related functionality in zircon (hw breakpoints/watchpoints, reg get/set, etc) but can't find anything similar here. Is that of interest to LK or is it intentionally omitted?

(also, is there a discussion group/etc that questions like this would be better directed to?)

swetland commented 4 years ago

I have a home grown thing called mdebug that I use for flashing and basic debug of Cortex-M MCUs when doing lk or bare metal hacking: https://github.com/swetland/mdebug

It provides a simple command-line interface to inspect/modify memory and registers, single-step, erase/flash, etc., plus a gdb bridge for hooking gdb up (with the right setup you can do multi-threaded lk debugging in gdb -- a coworker of mine a few years back used this with Eclipse).
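As a rough sketch of how a gdb bridge like this is typically used (the port number and ELF path here are illustrative assumptions, not mdebug's documented defaults -- check its README):

```
$ arm-none-eabi-gdb build-target/lk.elf
(gdb) target remote localhost:5555    # attach to the bridge's gdb stub (port is an assumption)
(gdb) info threads                    # with thread-aware setup, lists lk threads
(gdb) break main
(gdb) continue
```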

A google group mailing list thing for discussion sounds like a good idea -- github issues are kinda clunky -- @travisg, thoughts?

zinahe commented 4 years ago

I saw this really cool idea a few weeks ago and bookmarked it just in case. Not sure how easy or difficult it is to make it do what you are looking for.

ghost commented 2 years ago

Successfully debugged littlekernel using vscode, qemu, and the GNU riscv64 toolchain. I can access the registers, as can be seen in the terminal. At first `info registers` didn't work -- gdb thought I was querying a variable, probably because I am new to C/C++ and qemu -- but I have since figured out how to show register info.

Not sure if that is helpful, but if you are still interested in how I did the debugging with vscode: I downloaded the riscv64 toolchain from the releases at https://github.com/riscv-collab/riscv-gnu-toolchain/releases (the ones provided by apt don't ship a gdb for some reason; see https://stackoverflow.com/questions/68611071/how-to-install-riscv64-gdb). I then set up the build to use that toolchain by copying lk_inc.mk.example outside the root directory (where it is expected) and setting the TOOLCHAIN_PREFIX variable in that file to the toolchain's path (if you get one of the releases, it should be under riscv/bin).
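The toolchain override in lk_inc.mk might look roughly like this (the install path is hypothetical; adjust it to wherever you unpacked the release):

```make
# lk_inc.mk -- placed one directory above the lk checkout
# point the build at the downloaded riscv toolchain (path is an example)
TOOLCHAIN_PREFIX := /opt/riscv/bin/riscv64-unknown-elf-
```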

Then you edit ./scripts/doqemuriscv (run with -6) to build for riscv64. As a hack, I went into that script and added the debug flags directly. When you run the doqemuriscv script it starts qemu, and qemu waits for gdb to attach. To launch gdb from vscode, go into vscode and set up the debug configuration in launch.json. I may have forgotten something because I spent a lot of time trying to figure out how to debug the kernel lol.
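A minimal launch.json for attaching VS Code's cppdbg to QEMU's gdb stub might look like this (the ELF path, gdb path, and build directory name are assumptions for illustration; port 1234 is QEMU's default):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to QEMU (riscv64)",
      "type": "cppdbg",
      "request": "launch",
      "program": "${workspaceFolder}/build-qemu-virt-riscv64-test/lk.elf",
      "miDebuggerPath": "/opt/riscv/bin/riscv64-unknown-elf-gdb",
      "miDebuggerServerAddress": "localhost:1234",
      "cwd": "${workspaceFolder}",
      "stopAtEntry": false
    }
  ]
}
```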

Two things I don't understand yet about the kernel: where in the doqemuriscv script does qemu learn to listen on port 1234, and where in the build files can we change things so variables don't show up as "optimized out" when we try to inspect them?
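On the first question: the port comes from QEMU itself, not the script. QEMU's `-s` flag is documented shorthand for `-gdb tcp::1234`, so a script that passes `-s` never needs to name the port:

```
# these two invocations are equivalent per QEMU's documentation:
qemu-system-riscv64 -s -S ...             # -s: gdb stub on tcp::1234, -S: freeze CPU at startup
qemu-system-riscv64 -gdb tcp::1234 -S ...
```

On the second: "optimized out" generally means the compiler optimization level discarded the variable; building with `-Og` or `-O0` (alongside `-g`) keeps more variables inspectable, though exactly where to set that in lk's make files would need checking against the build system.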

ghost commented 2 years ago

Also, here's plain gdb with tui. I don't understand this code down there -- I am surprised gdb managed to find the assembly. I mean the comment which mentions the cpu lottery: shouldn't there be, by default, a cpu with mhartid 0? Why does there need to be a 'lottery' to pick which cpu is zero? I am getting my information from https://five-embeddev.com/riscv-isa-manual/latest/machine.html#hart-id-register-mhartid. We can also see here that lla is actually two instructions, auipc and addi.
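For reference, `lla` is an assembler pseudo-instruction for a pc-relative address computation; it expands to roughly the pair seen in the disassembly:

```
# lla a0, symbol   expands to approximately:
1:  auipc a0, %pcrel_hi(symbol)   # upper 20 bits of the pc-relative offset
    addi  a0, a0, %pcrel_lo(1b)   # lower 12 bits, relative to label 1
```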

travisg commented 2 years ago

In the case of many of the riscv socs that are either being emulated or actually exist, there is no guarantee that a) the first cpu to enter the kernel is hart 0, b) only one cpu enters at a time, or c) hart ids start at 0.

On a lot of the sifive socs, all three of these are the case.

That's why I had to build this fairly complicated scheme of mapping cpu ids to hart ids, just assigning them as cpus come in. Linux seems to do the same thing on the same hardware. If you boot multiple times in a row you'll get different physical cores assigned to the same cpu numbers.

It's a weird quirk of riscv platforms currently, and definitely threw a wrench in things for a bit until I got it figured out.