Marus / cortex-debug

Visual Studio Code extension for enhancing debug capabilities for Cortex-M Microcontrollers
MIT License

WSL2 support in Cortex-Debug. Discussion and Strategy #467

Open haneefdm opened 3 years ago

haneefdm commented 3 years ago

WSL2 is next on my list. I am trying to set up a Windows machine (my previous WSL2 seems like it got corrupted). I would like people to subscribe to this Issue and comment and help test it. There are several issues and workarounds already related to this and I would like to consolidate the discussion here.

First, comments are welcome on how it should work.

haneefdm commented 3 years ago

References: Issues #451, #402, #361, #66 and PR #328. There may be more.

s13n commented 3 years ago

Just my $0.02:

I would want to be able to work in either of those two setups:

  1. Building takes place in the WSL2 subsystem directly. Here, the cross tools (arm-none-eabi-gdb etc.) are installed in the WSL2 subsystem. Since there doesn't seem to be any USB passthrough into WSL2, the debugger, which runs inside the WSL2, needs to establish contact with the probe that is connected to the Windows host using networking.

  2. Building takes place in a docker container running within the WSL2 subsystem. You would typically be running Docker Desktop, which on Windows offers support for WSL2. In a nutshell, a Linux container with the cross tools runs within the WSL2 subsystem, and VScode runs on Windows in Dev Container mode. Again, since gdb runs inside the container, the network must be used to contact the debug probe.

The second approach may seem more complex, but it offers the advantage that the tool environment for a project can be shrink-wrapped in a ready-made container that is described concisely with a Dockerfile that can be maintained and versioned in its own git repository. Multiple such containers can be supported in parallel for different projects, without them getting in each other's way.
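As a concrete illustration of the second setup, a minimal dev-container definition might look like the sketch below. The image name is a placeholder for your own cross-tools container; `marus25.cortex-debug` is this extension's marketplace id.

```json
// .devcontainer/devcontainer.json -- sketch only; the image name is an
// assumption standing in for a project-specific cross-tools container
{
    "name": "arm-embedded",
    "image": "my-registry/arm-embedded-tools:latest",
    "customizations": {
        "vscode": {
            "extensions": ["marus25.cortex-debug"]
        }
    }
}
```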

The problem of how GDB talks to the probe is a multifaceted one, that depends a lot on the actual probe. It will have to be a network connection of some sort. Some example scenarios are:

  1. You have a probe with a network port, for example a SEGGER JLink PRO. In this case, there's probably no difference in the communication setup compared to the non-WSL2 case. With SEGGER, you install the Linux version of the SEGGER JLink software in the Dev Container (second case above) or WSL2 (first case above). No special software is needed on the Windows host.

  2. You have a SEGGER JLink connected to the Windows host via USB. In this case you can use the JLink remote server included in the SEGGER JLink software installation. Thus, you have to install the Windows version on your host. You only need to run the remote server on the host. Within WSL2 or within the Dev Container, you need the Linux version of the SEGGER software. The JLink GDB server is run in WSL2 or the Dev Container, respectively, and it is configured to connect to the remote server via IP. See the SEGGER docs for that. This scenario can also be used with a SEGGER probe connected to a different computer on the network. If your 3rd party probe can be reflashed to act as a JLink probe, which includes numerous probes integrated on an evaluation board, this should work, too.

  3. You run the GDB server on the Windows host, and have GDB inside WSL2 or the Dev Container contact this GDB server using IP. You might have to run the GDB server manually, unless a way can be found to start it from within the Dev Container. This should work even without a JLink based probe. In WSL2 or the Dev Container, no special probe-related software should be needed.
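For the third scenario, a launch configuration along these lines should already be expressible with the attribute names used elsewhere in this thread (a sketch; the host address, port, and ELF path are placeholders for whatever the manually started GDB server uses):

```json
{
    "name": "Attach via GDB server on Windows host",
    "type": "cortex-debug",
    "request": "attach",
    "cwd": "${workspaceFolder}",
    "executable": "${workspaceFolder}/build/firmware.elf",
    "servertype": "external",
    "gdbTarget": "172.20.0.1:3333"
}
```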

The first step would be to offer information on how to configure those scenarios properly, i.e. to describe what should work or what shouldn't. In this step it would be acceptable having to start some server manually, instead of having everything happening automatically when the debug button is pressed. It would also be acceptable to enter IP addresses or other settings manually depending on the local setup.

The second step would be to automate this stuff as much as possible.

Does this help you, @haneefdm ?

haneefdm commented 3 years ago

Does this help you, @haneefdm ?

@s13n Oh, helps a LOT, and thank you for such detail. And, a lot to think about over the weekend.

Would you agree with the following?

  1. Hard constraint: gdb should run wherever the source is compiled and ideally, this would be in WSL2/Linux environment. Or else path-names in the ELF file will be messed up. Messy, but can be corrected with gdb source-paths.
  2. Hard constraint: the gdb-server should run wherever the HW/probe is. This is to accommodate all (most) gdb-servers. J-Link PRO's remote-server is an exception, but in truth the real probe software runs where the probe is, right?
s13n commented 3 years ago

I would be able to live with constraint 1, but perhaps others will disagree.

I think constraint 2 is a bit too restrictive. I would say GDB server should run either where the probe is, or where GDB is. Supporting a third location would be overkill. This also supports the JLink remote server. One instance where it matters is when you use a JLink with network interface, for example the JLink Pro. You want to run the JLink GDB server where the GDB is, otherwise you end up installing the SEGGER software on both the host and in the dev container.

Of course, with a USB connected probe, you have to install the respective software on the host.

haneefdm commented 3 years ago

I added Constraint 2 because of the physical interface (USB) the HW connects to. This is more of a constraint for me, as I have to find a solution that works that way. And I am not suggesting supporting a 3rd location at all -- sorry if my words implied that. To summarize: the gdb-server and the HW under debug are always on the same machine, attached at the hip, so to say.

It is also a generalization, beyond Windows+WSL2.

s13n commented 3 years ago

Well, USB isn't the only physical interface that is supported by probes, it is only the most common one (by far). Why would you restrict yourself like that? What do you gain?

haneefdm commented 3 years ago

I am not restricting myself to it. It is a reality that I am stating. Once the server is not local, then it is remote. There is no in-between and I don't see WSL as an in-between thing. Maybe this is where I am wrong. I really don't care how the gdb-server connects to the HW.

We have the following scenarios:

  1. gdb-server runs where VSCode/gdb runs. Great, we already do that
  2. gdb-server runs on another machine. Okay, not great: TCP port selection and launching of the server are not automatic, and until recently SWO was not supported. This works 100%, but people are not happy with it.
  3. gdb-server cannot run locally in some cases which is where the WSL situation comes in but it degenerates to scenario 2 above.

It is 2 and 3 that we are trying to address, to make them look almost like #1. Note that we NEVER talk to the gdb-server, which is why we don't care where it lives. 90+% of the time TCP ports are used, and sometimes serial ports. But someone has to launch it. We never had to worry about whether the gdb-server is using USB or some other communication mechanism.

The reason GDB is run where the compiling is done is because of pathnames embedded in the elf file. If these are not right, then breakpoints don't work, stack traces will not have references back to source code, etc.

One thing we did not talk about is where is VSCode running? In my head, gdb and VSCode are running on the same machine. One reason is that all communication with GDB happens over stdio, while gdb itself may talk to the gdb-server over some connection (local or remote). Another is that this is the model GDB has chosen and has worked for over 3 decades.

Btw, the motherboard failed on the PC where I used to have WSL installed. It was my only PC. I ordered a new PC and am waiting. I normally do my testing in a VM, but here that gets convoluted: a Mac running Windows in a VM, which in turn hosts WSL2. So, no experiments until next week.

s13n commented 3 years ago

Have a look at https://code.visualstudio.com/docs/remote/remote-overview for some overview of how VS Code is used in remote mode, which includes WSL2.

We are talking about a scenario where VS Code runs on Windows in remote mode. The remote OS is either the WSL2 subsystem directly, or a docker container running within WSL2. The VS Code Server runs there, the source code resides there, and the cross tools including GDB run there.

The WSL2 case has the feature that you can launch Windows software from within the WSL2 subsystem. AFAIK you can't do this from within a docker container, but maybe the fact that VS Code runs on Windows provides a way to start some process there, even though the actual debugging takes place in the remote OS under control of the VS Code Server. I have no idea if VS Code helps you with that.

I am relatively new with the dev container way of working with VS Code, but I already prefer working in this way, due to the simplicity of maintaining a build environment that is specific to a project. You can also run the same container on a remote machine, for example your build server, rather than locally within WSL2, and you shouldn't notice much of a difference.

haneefdm commented 3 years ago

Have a look at https://code.visualstudio.com/docs/remote/remote-overview for some overview of how VS Code is used in remote mode, which includes WSL2.

Okay, that is a very different model; I was aware of it. I will look into it while I wait, but if you look at the repo for it, it has 802 issues and no commits since 5/11/2021. It is a package containing 3 extensions; the last commit in those repos was 4 months ago.

Some of it is the inverse model of what I was thinking.

One thing that is important to me (selfishly) is how to debug this extension itself. Without that it would be horrible.

I worked a bit on the MS C++ debug adapter and it was very difficult to do any cross-platform debugging of the extension. It was like a one-man circus show to juggle multiple VSCodes running on different machines. Both VSCode and Visual Studio were needed. I can't even explain.

jdswensen commented 2 years ago

@haneefdm, I think @s13n is leading you down the correct path here. I'm not familiar enough with VS Code's internal workings to tell you what is technically possible to solve the problem, but I can offer my use case and viewpoint.

I'm currently working on setting up a development environment at work for using Zephyr. Our build pipeline will be all Linux based tools, but our corporate issued computers are all Windows machines. The setup differences for Zephyr between Windows and Unix based systems is painful. Ideally, I could just define a Docker image that has all the necessary build and dev tools in it and use it everywhere instead of depending on other devs to properly set up a bunch of prerequisite software. If I need to have them install a couple things like probe drivers, I can live with that. It's better than maintaining documentation on installing an entire dev environment and manually setting path variables in a locked down Windows machine.

Something to consider is that even though WSL2 and Windows are running on the same physical machine, for the purpose of this issue they are effectively two different systems. WSL2 is a lightweight VM and if it properly supported USB passthrough the technical implementation might not be so complex. However, WSL2 does not currently support USB passthrough.

Microsoft's docs on Remote Development and Codespaces might help explain remote workspaces better.

wbober commented 2 years ago

Good input here!

Indeed, the configuration I'm interested in is:

As @haneefdm mentioned you can debug from WSL2 even today but the limitations are:

This could be easily solved since, as @s13n mentions, you can start any Windows process from within WSL2. From what I have checked, this is handled in the WSL2 kernel: when you try to execute a binary ending in *.exe, the relevant syscall is captured and passed to the host. This means we can launch the gdb server on the host and configure gdb to connect to the correct remote (host) port.
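A sketch of what that could look like from a WSL2 shell (the install path and device name are illustrative assumptions; `-nolocalhostonly` is the J-Link GDB server option, mentioned later in this thread, that lets it accept connections from outside localhost):

```shell
# Runs on the Windows host via WSL2's *.exe interop, even though it is
# typed in a Linux shell. Install path and device are placeholders.
"/mnt/c/Program Files (x86)/SEGGER/JLink/JLinkGDBServerCL.exe" \
    -device STM32F722IE -if swd -port 2331 -nolocalhostonly
```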

The whole thing is essentially a workaround for the lack of USB passthrough, which won't come soon, if ever. Since this can't be done with Docker, perhaps we should tackle these issues separately?

haneefdm commented 2 years ago

Thank you all. My new PC is finally here. Setting it up right now and then I will be able to try stuff out myself on what works well.

I am sure I can figure it out, but do you know how the client (Docker or WSL) environment can learn the host IP? I was told to look in /etc/resolv.conf, but that doesn't look right, at least for WSL2 -- especially if the client is in bridged mode or the host is using a VPN.

s13n commented 2 years ago

When using Docker Desktop for Windows, you can have the host's IP address resolved from within the container by using host.docker.internal as the DNS name.

See https://docs.docker.com/desktop/windows/networking/

haneefdm commented 2 years ago

Oh, that is super nice with Docker. Thanks @s13n

wbober commented 2 years ago

@haneefdm I think looking into /etc/resolv.conf is the correct way. It does work for me:

(screenshot of /etc/resolv.conf contents)
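For scripting this, something like the following should pull the first nameserver entry out of /etc/resolv.conf (a sketch; it assumes WSL2's default NAT networking, where that entry is the Windows host -- a VPN or bridged mode may change the picture):

```shell
# Take the first nameserver line from resolv.conf; in default WSL2
# networking this is the Windows host's address on the WSL switch.
HOST_IP=$(awk '/^nameserver/ { print $2; exit }' /etc/resolv.conf)
echo "$HOST_IP"
```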

haneefdm commented 2 years ago

@wbober, thanks. Does that work if the host Windows is using a VPN? Don't you see many entries in /etc/resolv.conf? The solution I am thinking of may not need to know the host IP at all. VSCode may help in this regard.

wbober commented 2 years ago

@wbober, thanks. Does that work if the host Windows is using a VPN? Don't you see many entries in /etc/resolv.conf? The solution I am thinking of may not need to know the host IP at all. VSCode may help in this regard.

Don't see much of a difference, i.e., I have the same contents in the resolv.conf file.

GitMensch commented 2 years ago
1. Hard constraint: gdb should run wherever the source is compiled and ideally, this would be in WSL2/Linux environment. Or else path-names in the ELF file will be messed up. Messy, but can be corrected with [gdb source-paths](https://sourceware.org/gdb/current/onlinedocs/gdb/Source-Path.html).

I "vote" against that constraint. Just allow setting source-path directly in the launch configuration and everything is fine.

haneefdm commented 2 years ago

Technically it is not a hard constraint for me. Of course, you can use source paths just like you can today. It has to do with the client-server VSCode architecture. This extension and the GDB are attached at the hip. That is the true hard constraint for me.

(VS Code remote development architecture diagram)

https://code.visualstudio.com/docs/remote/remote-overview.

As things stand, in one incarnation, Cortex-Debug would be classified in the "Remote OS" box (WSL, Docker, etc.) while the GUI itself would be in the "Local OS" box. That picture is not totally applicable to what we are doing, btw. Especially for where the 'Application is running'. You can also see where the Source code box is.

That little green box on the bottom right of the picture above is GDB.

I don't even know if that arch. is feasible for me but it is a start as a lot of groundwork has already been laid out.

GitMensch commented 2 years ago

That only applies if you use VS Code Server -- and as those binaries are non-free, I don't use that. That's likely the reason why I commonly think of "Local OS with source and GDB" attached to a "GDB server with the process". This "simple" scenario has also worked quite well for years for most setups. Note: when running on Windows, MSYS2 provides a gdb-multiarch.exe, so the "Debugger" part is solved (objdump is not).

haneefdm commented 2 years ago

and as those binaries are non-free I don't use that.

Which binaries? The VSCode server(s)? And free as in Open Source or some other meaning (costs)?

In VSCode's mind, 'Local OS' means the host running the UI. To me, 'Local OS' also meant "Local OS with source and GDB", but I had to teach myself a different way of thinking. VSCode is my host, so I play by the host's rules and thus its terminology. I have to go back and edit all my comments to make sure.

GitMensch commented 2 years ago

Yes, the VSCode servers. It is all about freedom, not price. This actually leads to the VSCode server not running everywhere I'd like it to for remote debugging (I'm not sure it actually works in every distro one can install in WSL). Actually, the client part of VSCode is also closed source, and the "gratis" extensions needed for that to work are only licensed for use with "Visual Studio Code binaries" (i.e., the ones provided by MS, where you are not even allowed to distribute your own copies). And from a practical view, those binaries are not available for all GNU/Linux distros where VSCode actually runs -- which is the reason I only use binaries that are as free as vscode (the main source), nowadays mainly VSCodium.

GitMensch commented 2 years ago

Note: in VSCode's mind, "Local OS" is also something that runs VSCode in a browser: it is actually the UI.

TorBlackbeard commented 2 years ago

My team and I have been using the mentioned setup for half a year now, and it's great (VSCode running on Windows + the 'Remote' plugin; compiler + JLinkGDBServer running on the Linux side). We use a Docker setup that uses the exact docker container the build server uses (test what you fly, fly what you test). We also use an Ubuntu-based WSL2 setup, with compilers manually installed. (This is mainly a question of ergonomics around mapped drives, network firewalling by McAfee, etc.)

I'm on Segger JLink tools, so I can run the Segger JLink GDB server in Linux, and I specify an IP (on my Windows host) where I have a JLink Remote Server running. This is NOT the same as running the gdb-server on Windows.

        "type": "cortex-debug",
        "servertype": "jlink",
        "serverpath": "JLinkGDBServerCLExe", <--- this *is* the linux executable
        "ipAddress": "172.20.15.135", <-- connects to "jlink-remote-server" on this machine

Annoyances: I have to write the IP directly. I can't write "host.docker.internal". Tried some ways to get it indirectly via something like "dig +short host.docker.internal", but so far no luck. ${env:HOST_IP} works, and I don't mind doing export HOST_IP=$(dig +short host.docker.internal) in a terminal when my IP changes on the Windows side, but that environment does not affect the VSCode environment, so it does not work. Any ideas greatly appreciated.

This is really minor: on Ubuntu-based WSL2 (the easiest to install, because it's in the official Windows Store), the gdb debugger is called arm-none-eabi-gdb, as expected by cortex-debug. On Fedora (the company default for docker containers, for some reason) one installs gdb-multiarch, and the command is just 'gdb', so I need a "gdbPath": "gdb". I hope Fedora and Debian converge at some point, so I can get rid of this difference. (Some team members make a symlink for arm-none-eabi-gdb, others live with a dirty file in git.)

/T

haneefdm commented 2 years ago

@andyinno Can you try the tools from the command line, and use gdb from the command line as well? You can see the exact command-line options used in the Debug Console.

Btw, I have to remove your comment as this thread is not for issue submissions or asking for help. Please re-open a new issue and someone might come along and help you. Once you submit a new issue, I will remove your comment from here.

If you want to tell us how to implement the remote/WSL debug then this is the right place. You are doing something this tool was not designed for -- if it works, great.

lagerholm commented 2 years ago

Maybe this: https://www.xda-developers.com/wsl-connect-usb-devices-windows-11/ https://www.elevenforum.com/t/connecting-usb-devices-to-wsl.2514/ will change some of the scope. Not tested yet, so I haven't verified that it actually works to have the probes connected directly to WSL2 over USB.
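For reference, the flow described in those links boils down to a few commands in an elevated Windows prompt (a sketch; subcommand names have changed between usbipd-win versions, and the bus id is a placeholder):

```shell
usbipd list                      # note the BUSID of the debug probe
usbipd bind --busid 1-2          # share the device
usbipd attach --wsl --busid 1-2  # hand it to the running WSL2 distro
# inside WSL2, lsusb should now list the probe
```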

haneefdm commented 2 years ago

@lagerholm Thank you so much for the info. This is great.

DanieleNardi commented 2 years ago

Hello, I tried to use Cortex-Debug in WSL2 using the usbipd-win tool to connect a host-connected J-Link to the WSL2 environment. The J-Link connection looks fine (at first look, it seems the JLink drivers are required on both the host and WSL2 sides, aligned to the same version), but the extension doesn't stop at main. launch.json looks like the following:

    {
        "name": "DVC TopRow Emerald Inventory",
        "type": "cortex-debug",
        "request": "launch",
        "cwd": "${workspaceFolder}",
        "armToolchainPath": "/opt/gcc-arm-none-eabi/bin",
        "executable": "path/to/elf",
        "serverpath": "/opt/SEGGER/JLink/JLinkGDBServerCLExe",
        "servertype": "jlink",
        "device": "MK10DX256xxx7",
        "interface": "jtag",
        "serialNumber": "proper serial number",
        "runToMain": true,
        "stopAtEntry": true,
        "svdFile": "path/to/svd"
    }

TorBlackbeard commented 2 years ago

at first look, it seems the JLink drivers are required on both the host and WSL2 sides, aligned to the same version

That makes me suspicious. Are you sure you don't get one of the Windows-side tools called by accident? If the USB is properly visible in WSL, it should work 100% on the Linux toolchain? (All IMHO, of course.)

Would like to test it out soon!

DanieleNardi commented 2 years ago

That makes me suspicious. Are you sure you don't get one of the windows-side tools called by accident?

I thought the same, I checked, but it wasn't.

If the USB is properly visible in the WSL, it should work 100% on the linux toolchain? (all imho, of cause)

Same, but actually didn't. Can't say why.

GitMensch commented 2 years ago

Same, but actually didn't. Can't say why.

Are you sure that this is WSL2? If not please check and if necessary convert, then check again.

DanieleNardi commented 2 years ago

I checked with usbipd-win team and the communication is fine.

JojoS62 commented 2 years ago

Are you using Windows 11 already? There, WSL has new features that are not available in Win 10.

DanieleNardi commented 2 years ago

Are you using Windows 11 already? There, WSL has new features that are not available in Win 10.

No, still Win10. It's a company PC, I don't know when (if) we'll get the update.

wbober commented 2 years ago

I gave the usbipd a go. I'm on Windows 10 and did a simple test: flash a sample program on nRF52840 DK that outputs logs through a JLINK VCOM. Here is the result:

PuTTY on Windows: (screenshot of clean log output)

Picocom on WSL2 through usbipd: (screenshot showing errors in the log output)

As you can see, there are some errors in the log output on WSL2. I suspect this might be related to the issue @DanieleNardi has. If there are communication errors, JLink might not work properly.

@TorBlackbeard a question about your setup: to my understanding, you need to start the remote JLink server on the host manually, right?

lzptr commented 2 years ago

It's great to see that WSL2 support is being pushed.

I'm interested in the following setup:

I managed to get a working debugging session using WSL2, USB passthrough with USB-IP, and JLink installed in WSL2. It worked under Windows 10 (21H2) and Windows 11. I had some problems with the udev service not running on WSL2; maybe that's also your problem. Issuing the following commands before attaching the USB device to WSL2 solved my problems:

sudo service udev restart
sudo udevadm control --reload

You also need to have the JLink udev rules copied to /etc/udev/rules.d.
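For reference, that step might look like this (a sketch; the rules file name and JLink install path are from a typical SEGGER Linux package and may differ on your system):

```shell
sudo cp /opt/SEGGER/JLink/99-jlink.rules /etc/udev/rules.d/
sudo udevadm control --reload
sudo udevadm trigger   # re-evaluate already-attached devices
```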

I documented the process for my EFR32 board here: https://github.com/RoboGnome/efr32_base

GitMensch commented 2 years ago

I managed to get a working debugging session using WSL2, USB Passthrough with USB-IP and JLink installed in WSL2.

Out of interest: Did you use any proprietary Microsoft extensions or "servers" (installed in WSL or in vscode) or was this just your setup noted above (which distro btw?) plus this extension?

lzptr commented 2 years ago

I have additionally installed the usbipd-win server from: https://github.com/dorssel/usbipd-win/releases

But that's open source. The steps to get it working are documented in the efr32_base project I have linked above.

My current setup is:

wbober commented 2 years ago

@RoboGnome I have the same versions, but on Windows 10. The UART doesn't work reliably for me, unfortunately. I'm pretty sure it's not a udev issue.

lzptr commented 2 years ago

@wbober Well, maybe it depends on the device or USB port used? But I'm no expert on USB.

The experience for me is stellar and it's exactly what I was missing in WSL2 to do all my embedded development on Windows+WSL2. So thanks to anyone who made it possible.

But I only tested it with my Thunderboard Sense 2, and only with a CLI from an example, so I didn't print a lot of data to see whether the console can keep up. But while testing it, I didn't notice any lag. The board comes with a JLink CDC UART, and after attaching it to my WSL2 I could connect to it using screen /dev/ttyACM0 115200. And it worked while I was debugging, which is neat, because now I finally have an integrated serial console while debugging.
But I hope the timing issues get fixed in the future.

askariz commented 2 years ago

WSL2 is next on my list. I am trying to set up a Windows machine (my previous WSL2 seems like it got corrupted). I would like people to subscribe to this Issue and comment and help test it. There are several issues and workarounds already related to this and I would like to consolidate the discussion here.

First, comments are welcome on how it should work.

Windows allows executing openocd.exe from within WSL2, which accesses the ST-LINK hardware on Windows. Here is my command to access the STM device from WSL2; everything works:

    /mnt/c/openocd/bin/openocd.exe -c "gdb_port 50000" -c "tcl_port 50001" -c "telnet_port 50002" \
        -s /mnt/c/Users/Administrator/Desktop/work/ODrive/Firmware \
        -f interface/stlink.cfg -f target/stm32f7x.cfg -c "reset_config none separate"

    Open On-Chip Debugger 0.11.0+dev-00572-g94e7535be (2022-02-19-14:36)
    Licensed under GNU GPL v2
    For bug reports, read http://openocd.org/doc/doxygen/bugs.html
    Info : auto-selecting first available session transport "hla_swd". To override use 'transport select '.
    Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
    Info : DEPRECATED target event trace-config; use TPIU events {pre,post}-{enable,disable}
    none separate
    Info : Listening on port 50001 for tcl connections
    Info : Listening on port 50002 for telnet connections
    Info : clock speed 2000 kHz
    Info : STLINK V2J29M18 (API v2) VID:PID 0483:374B
    Info : Target voltage: 3.224258
    Info : stm32f7x.cpu: Cortex-M7 r1p1 processor detected
    Info : stm32f7x.cpu: target has 8 breakpoints, 4 watchpoints
    Info : starting gdb server for stm32f7x.cpu on 50000
    Info : Listening on port 50000 for gdb connections

But when I launch from VSCode with this launch config:

    {
        // For the Cortex-Debug extension
        "type": "cortex-debug",
        "servertype": "openocd",
        "request": "attach",
        "name": "Debug ODrive v4.x - ST-Link",
        "executable": "${workspaceRoot}/build/ODriveFirmware.elf",
        "searchDir": ["/home/askariz/.vscode-server/extensions/marus25.cortex-debug-1.2.2/support/"],
        "configFiles": [
            "interface/stlink.cfg",
            "target/stm32f7x.cfg",
            "openocd-helpers.tcl"
        ],

        "showDevDebugOutput": "both",
        "openOCDLaunchCommands": [
            "reset_config none separate"
        ],
        "svdFile": "${workspaceRoot}/Private/v4/STM32F722.svd",
        "cwd": "${workspaceRoot}"
    },

It automatically runs:

    /mnt/c/openocd/bin/openocd.exe -c "gdb_port 50000" -c "tcl_port 50001" -c "telnet_port 50002" \
        -s /home/askariz/.vscode-server/extensions/marus25.cortex-debug-1.2.2/support/ \
        -f /home/askariz/.vscode-server/extensions/marus25.cortex-debug-1.2.2/support/openocd-helpers.tcl \
        -f interface/stlink.cfg -f target/stm32f7x.cfg -f openocd-helpers.tcl \
        -c "reset_config none separate"

and fails with:

    Open On-Chip Debugger 0.11.0+dev-00572-g94e7535be (2022-02-19-14:36)
    Licensed under GNU GPL v2
    For bug reports, read http://openocd.org/doc/doxygen/bugs.html
    embedded:startup.tcl:26: Error: Can't find /home/askariz/.vscode-server/extensions/marus25.cortex-debug-1.2.2/support/openocd-helpers.tcl
    in procedure 'script' at file "embedded:startup.tcl", line 26
    [2022-02-20T03:41:55.658Z] SERVER CONSOLE DEBUG: onBackendConnect: gdb-server session closed
    GDB server session ended. This terminal will be reused, waiting for next session to start...

which causes the error: Can't find /home/askariz/.vscode-server/extensions/marus25.cortex-debug-1.2.2/support/openocd-helpers.tcl. But this file truly exists on my WSL2...

Any suggestions?

mpekurin commented 2 years ago

@askariz you need to use "servertype": "external" to access the server that's running on Windows side from WSL2:

"servertype": "external",
"gdbTarget": "hostname.mshome.net:port"

where hostname is the name of your PC. For this to work, you need to add a firewall rule to allow connections from WSL2, if you haven't already. Run this in PowerShell:

New-NetFirewallRule -DisplayName "WSL" -Direction Inbound -InterfaceAlias "vEthernet (WSL)" -Action Allow
GitMensch commented 2 years ago

This may be useful for both the WSL scenario and for external debugging where the source path on the client (vscode) is not the same as the debugger (gdbserver, possibly running on wsl) sees:

I suggest adding a sourceFileMap setting. This would need to be applied anywhere the client passes paths (breakpoints, "go to cursor") and wherever the client receives paths (I guess that's only the stack).

Should that be moved out to a separate issue?

haneefdm commented 2 years ago

People are using their own gdb-init scripts to add source maps; no help from the extension is needed. The bigger problem is how breakpoints are set and how stack traces are interpreted. Disassembly is affected as well. How we look for static variables is also affected -- as in, we have to implement what gdb does. VSCode manages breakpoints with the paths (file/line) that it sees. It wasn't clear to me how gdb handles file/line breakpoints and whether clients (us) have to do the reverse mapping. Yes, a separate issue would be good, but with details of what it all means. In fact, it is more useful in a non-WSL setting.

cpptools/cppdbg added support for source maps. We can check to see what all they do with that info.

AaronFontaine-DojoFive commented 1 year ago

I've recently been trying to solve the same for a client project. I read this entire thread and did some experimentation and here's what I came up with.

In my .bashrc file in WSL, I added the following line:

export HOST_IP=`ip route|awk '/^default/{print $3}'`

In the launch.json file for VS Code, I added the following to serverArgs:

"-select", "ip=${env:HOST_IP}"

Then I just need to make sure the JLink Remote Server is started on Windows and running in LAN mode before launching a debug session. This could probably also be triggered from launch.json; I need to look more into that. The JLink Remote Server does not let me select the network interface when running in LAN mode, and for some reason always picks a VMware network interface (192.168.253.1). Somehow this works, even though the JLink GDB Server running in Ubuntu is connecting to 172.28.208.1, which is nice because I don't have to manually copy any IP addresses. It implies the whole thing may be automatable.

Note that HOST_IP here refers to the Windows host (what VS Code calls the "Local OS" when running a remote connection).

AaronFontaine-DojoFive commented 1 year ago

As far as hard constraint 1 above

gdb should run wherever the source is compiled and ideally, this would be in WSL2/Linux environment

I haven't seen any discussion yet about -fdebug-prefix-map. I have had good success with it when moving ELF files between separate build and debug environments. This may not be a good all-around solution, though, as it is gcc-specific.
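For reference, a sketch of the idea (paths are illustrative): the build strips its own source prefix from the debug info, and gdb maps it back to wherever the source lives on the debug side:

```shell
# At build time: record source paths relative to "." instead of the CI path
arm-none-eabi-gcc -g -fdebug-prefix-map=/home/ci/project=. -c main.c -o main.o

# At debug time (e.g. in a .gdbinit or launch commands):
#   set substitute-path . /home/me/project
```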

lzptr commented 1 year ago

@aaronf-at-d5 I also tried both the Remote Server and the USB-IP approach. You can add a preLaunchTask to your launch.json configuration to start the JLink server. Here's my example:

sullivanmj commented 1 year ago

It may be possible to attach USB-based debuggers to WSL via cortex-debug in a way that does not require any of the following:

  1. Use of USB-IP
  2. Pre-launch/post-launch tasks to start remote GDB server
  3. Running of other remote debugging assistant applications such as J-Link Remote Server

My thought is to leverage the fact that Windows executables, when invoked from WSL, execute within the context of Windows.

This means that we can invoke a GDB server on the Windows host using the cortex-debug serverPath and perhaps serverArgs (to allow remote connections, etc) debug attributes.

Then the GDB client could be invoked from within WSL and attached per usual, but with some of the tricks listed above to use the correct target IP address and port - either using something like host.docker.internal:50000 or hostname.mshome.net:50000, or something like ${env:HOST_IP}:50000.

Debugging would commence as usual, and any other clients of the GDB server that you wish to attach from Windows could also operate normally. Then at the end of the debug session, the GDB connection would be cleanly closed as usual from the GDB client.

To achieve all of this, I was hoping to find a way to use both the serverPath and gdbTarget debug attributes, but it seems that gdbTarget is only used when the servertype is external, in which case, the serverPath is not used. This is an understandable behavior given the context of normal GDB client/server connections, but it prevents the approach I'm describing from being used.

I'm thinking that the simplest way to work around that limitation, would be the creation of an overrideInitCommands debug attribute. This attribute would allow manual specification of how the connection from the GDB client to the server should be initialized from within launch.json. I have a branch that attempts this functionality located here. Presumably, this approach could be leveraged in a variety of other use-cases, too. It would be used in the launch.json like so:

"overrideInitCommands":"target remote ${env:HOST_IP}:50000"

Am I off my rocker? Is there a better way of achieving what I've described above with the existing functionality of cortex-debug, or should I open a PR?

AaronFontaine-DojoFive commented 1 year ago

@sullivanmj, I like where you're going with this line of thinking. The reason using preLaunchTask to forward to tasks.json works is because a lot of the Windows-native paths are automatically mapped into WSL with the path renaming necessary to point into /mnt/c/. This allows us to just invoke JLinkGDBServerCL.exe and let Windows hooks take care of it. The process is clunky though and fails more easily. It would be nice to tell the Cortex Debug extension directly in launch.json, that we're using the Windows executable. You would still have to provide the -nolocalhostonly in serverArgs and use the ${env:HOST_IP}:2331 trick.

sullivanmj commented 1 year ago

@aaronf-at-d5 Good point about the path being mapped into WSL. I agree, it would be great to have this way of doing it natively supported by Cortex-Debug. Now that I've written that post, it appears that I didn't actually discover this first; @askariz appears to have done this with OpenOCD, but it looks like they were invoking it separately from launch.json.