metalbear-co / mirrord

Connect your local process and your cloud environment, and run local code in cloud conditions.
https://mirrord.dev

VS Code simple golang example not working #834

Closed MidasLamb closed 1 year ago

MidasLamb commented 1 year ago

Bug Description

I've installed the mirrord VS Code extension. Running the following snippet allows me to hit a breakpoint set on the "Hello, world." line:

```go
package main

import (
    "fmt"
)

func main() {
    fmt.Println("Hello, world.")
}
```

However, once I make it an HTTP server:

```go
package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    fmt.Println("Hello, world.")

    err := http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Printf("got / request\n")
        io.WriteString(w, "This is my website!\n")
    }))
    fmt.Printf("%s", err.Error())
}
```

I never hit the breakpoint on the "Hello, world." line, but I get this in VS Code's debug console:

DAP server listening at: 127.0.0.1:42015
Type 'dlv help' for list of commands.
Process 169465 has exited with status -11
Detaching
dlv dap (169309) exited with code: 0

Steps to Reproduce

  1. Install mirrord VS Code extension
  2. Enable mirrord in VS Code
  3. go mod init (sketched after these steps) and create main.go with the following content:

```go
package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    fmt.Println("Hello, world.")

    err := http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Printf("got / request\n")
        io.WriteString(w, "This is my website!\n")
    }))
    fmt.Printf("%s", err.Error())
}
```

4. Debug it with the following launch config:
```json
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch Package",
            "type": "go",
            "request": "launch",
            "mode": "auto",
            "program": "${workspaceFolder}"
        }
    ]
}
```

5. Receive the following output:
    Type 'dlv help' for list of commands.
    Process 169465 has exited with status -11
    Detaching
    dlv dap (169309) exited with code: 0
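
For completeness, step 3 amounts to something like the following; the module path is a hypothetical placeholder, and any path works for a local repro:

```sh
# Initialize a module so the package builds (hypothetical module path).
go mod init example.com/mirrord-repro
# Sanity-check that it compiles before starting the debugger.
go build .
```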

Backtrace

No response

Relevant Logs

No response

Your operating system and version

Arch Linux 6.0.11-arch1-1

Local process

dlv

Local process version

No response

Additional Info

No response

aviramha commented 1 year ago

Thanks for reporting this. What is your Go version?

MidasLamb commented 1 year ago
go version go1.19.3 linux/amd64
MidasLamb commented 1 year ago

and for dlv:

Delve Debugger
Version: 1.20.0
Build: $Id: 8ec46ee3d275c276b8e7465d69a23399e0e14789 $
aviramha commented 1 year ago

Hey @MidasLamb - I managed to reproduce it and also partially fix it. I don't want to deliver a half-baked fix, so this is blocked on other existing issues we are working on so that we can solve this properly. The relevant blocker is #780. Leaving this open until then :)

MidasLamb commented 1 year ago

Hi @aviramha , thanks for the investigation! If I understand the referenced issue correctly, that means it should already work with the CLI? Or is there something else that would also need to be fixed in the CLI itself for this issue?

aviramha commented 1 year ago

> Hi @aviramha , thanks for the investigation!
>
> If I understand the referenced issue correctly, that means it should already work with the CLI? Or is there something else that would also need to be fixed in the CLI itself for this issue?

Sorry, the CLI fix is pending the extension change (so we can have a "unified" approach). Would delve working with the CLI be useful for you? I can try to push that quite fast.

MidasLamb commented 1 year ago

I've looked into the VS Code extension just because it seemed easiest to set up and get going, but normally I use neovim. If there is a CLI fix, I'll just set everything up in neovim, so a CLI fix would already be pretty useful for me :)

aviramha commented 1 year ago

Hey! This should be fixed with https://github.com/metalbear-co/mirrord/releases/tag/3.14.0 - Please try it out and let us know if we can close this.

MidasLamb commented 1 year ago

@aviramha I've tried it with the VS Code extension again. When I steal the traffic it works as expected, but when mirroring, the agent seems to crash as soon as an HTTP request is sent. I get the following logs for the same simple example:

DAP server listening at: 127.0.0.1:36783
Type 'dlv help' for list of commands.
Hello, world.
2022-12-15T08:11:23.862809Z ERROR ThreadId(02) mirrord_kube::api: agent disconnected
2022-12-15T08:11:23.862916Z ERROR ThreadId(02) mirrord_layer: agent connection lost
mirrord has encountered an error and is now exiting.
thread 'tokio-runtime-worker' panicked at 'explicit panic', mirrord/layer/src/lib.rs:455:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Process 37581 has exited with status -15
Detaching
dlv dap (37415) exited with code: 0

You can see the "Hello, world." from the initial main, but when I open the browser to go to the site, it's a bit slow and then the debug session ends. Not sure if it's the same issue? I can close this one and open a new one if you want!

aviramha commented 1 year ago

Just to confirm: the issue is only with mirroring, and steal works well? Can you add this to your configuration:

"agent": {"ttl": 100}

and then fetch the agent logs? Thank you!
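
For reference, a minimal sketch of where that snippet sits in a full mirrord configuration file; the feature block mirrors the mirrord.json shown later in this thread, and the ttl presumably keeps the agent pod around after the session so its logs can still be collected:

```json
{
    "agent": {
        "ttl": 100
    },
    "feature": {
        "network": {
            "incoming": "mirror"
        }
    }
}
```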

MidasLamb commented 1 year ago

With the simple example, steal works well but mirroring does not. I just tried again, and this time it handled one request before disconnecting.

How do I fetch the agent logs? I see there is a mirrord extract command, but that seems to be for the local filesystem?

aviramha commented 1 year ago

Ah, that's a good observation. mirrord extract is... well, it was used for the extension and will be removed soon (it just extracts the layer).

You can just use kubectl logs after finding the agent's pod name with kubectl get pods. (We plan to add an easier log-gathering mechanism, but we don't have one right now, sorry.)
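
For anyone following along, the log fetching described above would look roughly like this (the agent pod name below is illustrative; use whatever name kubectl get pods actually shows):

```sh
# Find the agent pod that mirrord spawned in the cluster.
kubectl get pods
# Fetch its logs, substituting the real pod name.
kubectl logs mirrord-agent-abcde
```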

MidasLamb commented 1 year ago

I tried to reproduce it today, and now the difference is not that the agent disconnects, but rather that, when mirroring, the Go code panics with:

runtime: g 28: unexpected return pc for net.(*conn).Read called from 0x7f5494eb4548
stack: frame={sp:0xc000076e78, fp:0xc000076f10} stack=[0xc000076000,0xc000077000)
0x000000c000076d78:  0x000000000040f6ee <runtime.(*mcache).nextFree+0x00000000000000ce>  0x0000000000000000 

But when stealing, everything is fine.

The agent logs seem fine as well. For mirroring:

2022-12-16T13:03:56.522522Z  INFO ThreadId(01) mirrord_agent: agent ready
2022-12-16T13:03:57.567311Z  WARN ThreadId(14) mirrord_agent::outgoing::udp: interceptor_task -> no messages left
2022-12-16T13:03:57.567396Z  WARN ThreadId(13) mirrord_agent::outgoing: interceptor_task -> no messages left
2022-12-16T13:04:23.001065Z  WARN ThreadId(20) mirrord_agent::outgoing: interceptor_task -> no messages left
2022-12-16T13:04:23.001486Z  WARN ThreadId(21) mirrord_agent::outgoing::udp: interceptor_task -> no messages left

for stealing:

2022-12-16T13:01:44.877298Z  INFO ThreadId(01) mirrord_agent: agent ready
2022-12-16T13:01:46.205931Z  WARN ThreadId(13) mirrord_agent::outgoing: interceptor_task -> no messages left
2022-12-16T13:01:46.205938Z  WARN ThreadId(14) mirrord_agent::outgoing::udp: interceptor_task -> no messages left
2022-12-16T13:02:10.722245Z  WARN ThreadId(21) mirrord_agent::outgoing::udp: interceptor_task -> no messages left
2022-12-16T13:02:10.722445Z  WARN ThreadId(20) mirrord_agent::outgoing: interceptor_task -> no messages left
2022-12-16T13:02:40.741610Z  INFO ThreadId(01) mirrord_agent: main -> mirrord-agent `start` exiting successfully.
aviramha commented 1 year ago

@MidasLamb Thanks! We're working on it and will let you know once we have progress.

MidasLamb commented 1 year ago

@aviramha , great thanks!

I've also tried mirrord on a more complex program now, but that one segfaults before entering main:

SIGSEGV: segmentation violation
PC=0x7f52d486057d m=0 sigcode=1

goroutine 1 [syscall, locked to thread]:
syscall.Syscall(0x48, 0x0, 0x3, 0x0)

Debugging locally works fine. Not sure if these two would be related or not?

aviramha commented 1 year ago

Can you try 3.15.2? I think we fixed the issue with mirroring in #875.

f-blass commented 1 year ago

For me it is also not working. I am using the GoLand plugin and a simple fmt.Println("Hello, world.") example, as described in the first post.

On reaching the breakpoint, the program just hangs without actually entering the debugger in the IDE. It is independent of the mirrord config (steal vs. mirror).

The console output does not show anything:

API server listening at: 127.0.0.1:58564
debugserver-@(#)PROGRAM:LLDB  PROJECT:lldb-1400.0.38.17
 for arm64.
Got a connection, launched process /private/var/folders/lb/y5b0sgds5nv7z_vr4y9z68kc0000gn/T/GoLand/___1go_build_playground_http (pid = 96919).

I am using Plugin version 3.16.2

aviramha commented 1 year ago

> For me it is also not working. I am using the GoLand plugin and a simple fmt.Println("Hello, world.") example, as described in the first post.
>
> On reaching the breakpoint, the program just hangs without actually entering the debugger in the IDE. It is independent of the mirrord config (steal vs. mirror).
>
> The console output does not show anything:
>
> API server listening at: 127.0.0.1:58564
> debugserver-@(#)PROGRAM:LLDB  PROJECT:lldb-1400.0.38.17
>  for arm64.
> Got a connection, launched process /private/var/folders/lb/y5b0sgds5nv7z_vr4y9z68kc0000gn/T/GoLand/___1go_build_playground_http (pid = 96919).
>
> I am using Plugin version 3.16.2

Very weird - I managed to reproduce this. Can you open a new issue for this, as it's probably a different bug than the one tracked here?

MidasLamb commented 1 year ago

> Can you try 3.15.2? I think we fixed the issue with mirroring in #875.

Thanks for the work and sorry for the delay! I've tried with version 3.17.0 now, and the segfault still occurs for the complex program, both for mirror and steal.

The debug console log:

SIGSEGV: segmentation violation
PC=0x7faba5b532dd m=0 sigcode=1

goroutine 1 [syscall, locked to thread]:
syscall.Syscall(0x48, 0x0, 0x3, 0x0)
    /usr/lib/go/src/syscall/syscall_linux.go:68 +0x4b fp=0xc0000d59b8 sp=0xc0000d5940 pc=0x908eab
internal/syscall/unix.IsNonblock(0x0)
    /usr/lib/go/src/internal/syscall/unix/nonblocking.go:16 +0x45 fp=0xc0000d5a28 sp=0xc0000d59b8 pc=0x936f05
os.NewFile(0x0, {0x450fc52, 0xa})
    /usr/lib/go/src/os/file_unix.go:103 +0x4c fp=0xc0000d5a98 sp=0xc0000d5a28 pc=0x9511ac
os.init()
    /usr/lib/go/src/os/file.go:66 +0x1e8 fp=0xc0000d5ac0 sp=0xc0000d5a98 pc=0x958ac8
runtime.doInit(0x6021520)
    /usr/lib/go/src/runtime/proc.go:6329 +0x132 fp=0xc0000d5bf0 sp=0xc0000d5ac0 pc=0x850b92
runtime.doInit(0x601b740)
    /usr/lib/go/src/runtime/proc.go:6306 +0x79 fp=0xc0000d5d20 sp=0xc0000d5bf0 pc=0x850ad9
runtime.doInit(0x601e920)
    /usr/lib/go/src/runtime/proc.go:6306 +0x79 fp=0xc0000d5e50 sp=0xc0000d5d20 pc=0x850ad9
runtime.doInit(0x602dc20)
    /usr/lib/go/src/runtime/proc.go:6306 +0x79 fp=0xc0000d5f80 sp=0xc0000d5e50 pc=0x850ad9
runtime.main()
    /usr/lib/go/src/runtime/proc.go:233 +0x199 fp=0xc0000d5fe0 sp=0xc0000d5f80 pc=0x843739
runtime.goexit()
    /usr/lib/go/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000d5fe8 sp=0xc0000d5fe0 pc=0x876701

goroutine 2 [force gc (idle)]:
runtime.gopark(0x45fdca8, 0x6108ee0, 0x11, 0x14, 0x1)
    /usr/lib/go/src/runtime/proc.go:363 +0xfd fp=0xc000092f88 sp=0xc000092f58 pc=0x843b9d
runtime.goparkunlock(0x0?, 0x0?, 0x0?, 0x0?)
    /usr/lib/go/src/runtime/proc.go:369 +0x2a fp=0xc000092fb8 sp=0xc000092f88 pc=0x843c2a
runtime.forcegchelper()
    /usr/lib/go/src/runtime/proc.go:302 +0xa5 fp=0xc000092fe0 sp=0xc000092fb8 pc=0x8439c5
runtime.goexit()
    /usr/lib/go/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000092fe8 sp=0xc000092fe0 pc=0x876701
created by runtime.init.6
    /usr/lib/go/src/runtime/proc.go:290 +0x25

goroutine 3 [GC sweep wait]:
runtime.gopark(0x45fdca8, 0x6109d00, 0xc, 0x14, 0x1)
    /usr/lib/go/src/runtime/proc.go:363 +0xfd fp=0xc000093768 sp=0xc000093738 pc=0x843b9d
runtime.goparkunlock(0x0?, 0x0?, 0x0?, 0x0?)
    /usr/lib/go/src/runtime/proc.go:369 +0x2a fp=0xc000093798 sp=0xc000093768 pc=0x843c2a
runtime.bgsweep(0x0?)
    /usr/lib/go/src/runtime/mgcsweep.go:278 +0x98 fp=0xc0000937c8 sp=0xc000093798 pc=0x82b5d8
runtime.gcenable.func1()
    /usr/lib/go/src/runtime/mgc.go:178 +0x26 fp=0xc0000937e0 sp=0xc0000937c8 pc=0x81f986
runtime.goexit()
    /usr/lib/go/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000937e8 sp=0xc0000937e0 pc=0x876701
created by runtime.gcenable
    /usr/lib/go/src/runtime/mgc.go:178 +0x6b

goroutine 4 [GC scavenge wait]:
runtime.gopark(0x45fdca8, 0x610d980, 0xd, 0x14, 0x2)
    /usr/lib/go/src/runtime/proc.go:363 +0xfd fp=0xc000093f48 sp=0xc000093f18 pc=0x843b9d
runtime.goparkunlock(0x486ec08?, 0x1?, 0x0?, 0x0?)
    /usr/lib/go/src/runtime/proc.go:369 +0x2a fp=0xc000093f78 sp=0xc000093f48 pc=0x843c2a
runtime.(*scavengerState).park(0x610d980)
    /usr/lib/go/src/runtime/mgcscavenge.go:389 +0x4b fp=0xc000093fa0 sp=0xc000093f78 pc=0x82926b
runtime.bgscavenge(0x0?)
    /usr/lib/go/src/runtime/mgcscavenge.go:617 +0x45 fp=0xc000093fc8 sp=0xc000093fa0 pc=0x829845
runtime.gcenable.func2()
    /usr/lib/go/src/runtime/mgc.go:179 +0x26 fp=0xc000093fe0 sp=0xc000093fc8 pc=0x81f926
runtime.goexit()
    /usr/lib/go/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000093fe8 sp=0xc000093fe0 pc=0x876701
created by runtime.gcenable
    /usr/lib/go/src/runtime/mgc.go:179 +0xaa

rax    0xffffffffffffffda
rbx    0xffffffffffffffda
rcx    0x7faba5b53298
rdx    0x0
rdi    0x0
rsi    0xc0000d58f0
rbp    0xc0000d5930
rsp    0xc0000d58e8
r8     0x0
r9     0x7faba5b52f82
r10    0x0
r11    0x212
r12    0x48
r13    0x0
r14    0x0
r15    0x3
rip    0x7faba5b532dd
rflags 0x10202
cs     0x33
fs     0x0
gs     0x0
aviramha commented 1 year ago

Thanks for the update. Does this happen every time? If so, does it happen at the same step? Can you share what happens in that step?

MidasLamb commented 1 year ago

Every time I've tested today (around 10 times) it happens. I've got a breakpoint on the first statement in main, but the segfault occurs before I hit that breakpoint.

aviramha commented 1 year ago

> Every time I've tested today (around 10 times) it happens. I've got a breakpoint on the first statement in main, but the segfault occurs before I hit that breakpoint.

Does it happen without the debugger as well (from the CLI, for example)?

aviramha commented 1 year ago

Also, can you share the dependencies used in that project? If it happens before main, I assume some init logic in those modules triggers it, so the list would be useful for reproducing.

MidasLamb commented 1 year ago

Doing mirrord exec go run . works. I'm currently trying to make it work with dlv, but I'm having some issues connecting to the debugger (also when running locally); I'll let you know once I have more info on that.

But with mirrord exec --fs-mode read go run . it still seems to read from the local disk rather than from the pod (there is a config file mounted into the pod that it can't find; however, I can list my own home directory successfully).

aviramha commented 1 year ago

Regarding the second point: read is the default mode, so no need to specify it. We have default overrides for local/remote, i.e. some paths are read locally by default and need to be overridden. Can you try creating a mirrord.json file and using mirrord exec -f mirrord.json with it? The file should contain the following (you can change it as you wish besides the fs part :)):

```json
{
    "accept_invalid_certificates": false,
    "feature": {
        "network": {
            "incoming": "mirror",
            "outgoing": true
        },
        "fs": {
            "mode": "read",
            "read_only": ["/tmp/foo*+"]
        },
        "env": true
    }
}
```

Change /tmp/foo*+ to your path. Please note that it's a regex, so in this case anything in the directory /tmp/foo should work.
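
Putting it together, the invocation would then look like this (a sketch combining the flag above with the go run command already used in this thread):

```sh
# Run the program through mirrord with the custom config file.
mirrord exec -f mirrord.json go run .
```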

MidasLamb commented 1 year ago

Even with the fs object, it still can't find the file :/

aviramha commented 1 year ago

Can you run file your_binary and provide the output here? P.S. If you want, we can schedule some time to debug this interactively. Feel free to pick a time here

MidasLamb commented 1 year ago

I'll book a slot, but taking a quick look, it'll probably be next week! From the command line I've been running it with go run; however, when I first do go build and then try to run the binary with mirrord, I again get the segmentation fault:

SIGSEGV: segmentation violation
PC=0x7f9070b532dd m=0 sigcode=1

goroutine 1 [syscall, locked to thread]:
syscall.Syscall(0x7f90706a4108?, 0x10?, 0x544c760?, 0xc000040210?)
        /usr/lib/go/src/syscall/syscall_linux.go:68 +0x27 fp=0xc0000aba30 sp=0xc0000ab9c0 pc=0x8c9407
internal/syscall/unix.IsNonblock(0xc0000aba88?)
        /usr/lib/go/src/internal/syscall/unix/nonblocking.go:16 +0x2d fp=0xc0000aba60 sp=0xc0000aba30 pc=0x8e23cd
os.NewFile(0xc0000abad8?, {0x35fa10f, 0xa})
        /usr/lib/go/src/os/file_unix.go:103 +0x28 fp=0xc0000aba98 sp=0xc0000aba60 pc=0x8f25e8
os.init()
        /usr/lib/go/src/os/file.go:66 +0x1e5 fp=0xc0000abac0 sp=0xc0000aba98 pc=0x8f6645
runtime.doInit(0x535f500)
        /usr/lib/go/src/runtime/proc.go:6329 +0x126 fp=0xc0000abbf0 sp=0xc0000abac0 pc=0x84a1c6
runtime.doInit(0x5359720)
        /usr/lib/go/src/runtime/proc.go:6306 +0x71 fp=0xc0000abd20 sp=0xc0000abbf0 pc=0x84a111
runtime.doInit(0x535c900)
        /usr/lib/go/src/runtime/proc.go:6306 +0x71 fp=0xc0000abe50 sp=0xc0000abd20 pc=0x84a111
runtime.doInit(0x536bc00)
        /usr/lib/go/src/runtime/proc.go:6306 +0x71 fp=0xc0000abf80 sp=0xc0000abe50 pc=0x84a111
runtime.main()
        /usr/lib/go/src/runtime/proc.go:233 +0x1d3 fp=0xc0000abfe0 sp=0xc0000abf80 pc=0x83cdd3
runtime.goexit()
        /usr/lib/go/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000abfe8 sp=0xc0000abfe0 pc=0x86ee01

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        /usr/lib/go/src/runtime/proc.go:363 +0xd6 fp=0xc000098fb0 sp=0xc000098f90 pc=0x83d1d6
runtime.goparkunlock(...)
        /usr/lib/go/src/runtime/proc.go:369
runtime.forcegchelper()
        /usr/lib/go/src/runtime/proc.go:302 +0xad fp=0xc000098fe0 sp=0xc000098fb0 pc=0x83d06d
runtime.goexit()
        /usr/lib/go/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000098fe8 sp=0xc000098fe0 pc=0x86ee01
created by runtime.init.6
        /usr/lib/go/src/runtime/proc.go:290 +0x25

goroutine 3 [GC sweep wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        /usr/lib/go/src/runtime/proc.go:363 +0xd6 fp=0xc000099790 sp=0xc000099770 pc=0x83d1d6
runtime.goparkunlock(...)
        /usr/lib/go/src/runtime/proc.go:369
runtime.bgsweep(0x0?)
        /usr/lib/go/src/runtime/mgcsweep.go:278 +0x8e fp=0xc0000997c8 sp=0xc000099790 pc=0x82792e
runtime.gcenable.func1()
        /usr/lib/go/src/runtime/mgc.go:178 +0x26 fp=0xc0000997e0 sp=0xc0000997c8 pc=0x81c506
runtime.goexit()
        /usr/lib/go/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000997e8 sp=0xc0000997e0 pc=0x86ee01
created by runtime.gcenable
        /usr/lib/go/src/runtime/mgc.go:178 +0x6b

goroutine 4 [GC scavenge wait]:
runtime.gopark(0xc00005a070?, 0x39bce80?, 0x1?, 0x0?, 0x0?)
        /usr/lib/go/src/runtime/proc.go:363 +0xd6 fp=0xc000099f70 sp=0xc000099f50 pc=0x83d1d6
runtime.goparkunlock(...)
        /usr/lib/go/src/runtime/proc.go:369
runtime.(*scavengerState).park(0x544b960)
        /usr/lib/go/src/runtime/mgcscavenge.go:389 +0x53 fp=0xc000099fa0 sp=0xc000099f70 pc=0x8259d3
runtime.bgscavenge(0x0?)
        /usr/lib/go/src/runtime/mgcscavenge.go:617 +0x45 fp=0xc000099fc8 sp=0xc000099fa0 pc=0x825fa5
runtime.gcenable.func2()
        /usr/lib/go/src/runtime/mgc.go:179 +0x26 fp=0xc000099fe0 sp=0xc000099fc8 pc=0x81c4a6
runtime.goexit()
        /usr/lib/go/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000099fe8 sp=0xc000099fe0 pc=0x86ee01
created by runtime.gcenable
        /usr/lib/go/src/runtime/mgc.go:179 +0xaa

rax    0xffffffffffffffda
rbx    0xffffffffffffffda
rcx    0x7f9070b53298
rdx    0x0
rdi    0x0
rsi    0xc0000ab970
rbp    0xc0000ab9b0
rsp    0xc0000ab968
r8     0x0
r9     0x7f9070b52f82
r10    0x0
r11    0x216
r12    0x48
r13    0x0
r14    0x0
r15    0x3
rip    0x7f9070b532dd
rflags 0x10206
cs     0x33
fs     0x0
gs     0x0

Running file on the built binary gives:

pluglet-golang-zaza: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=dfddfb078c60ffbaac416b394e1cd02f7839b704, for GNU/Linux 4.4.0, with debug_info, not stripped
aviramha commented 1 year ago

Debugging this deeper, this is most likely an init function being called. We found an issue with one of the syscall trampoline cases where the arguments got corrupted, but it still segfaults on return. Waiting for the dependency list to be able to reproduce locally.

aviramha commented 1 year ago

We managed to create a minimal reproducible example: it happens when importing specific modules.

```go
package main

// Blank imports are enough to trigger the init logic that crashes.
import _ "rogchap.com/v8go"
import _ "go.kuoruan.net/v8go-polyfills"
import "fmt"

// "os" was unused in the original snippet; blank-import it so the example compiles.
import _ "os"

func main() {
    fmt.Println("asdsad")
}
```
aviramha commented 1 year ago

Update: Digging further, it seems that the r14 register gets clobbered when the Frida trampoline returns for us. r14 holds the g variable, which is null when the caller checks it. Not sure why it happens here or how to solve it yet. Note to self: re-implement the original assembly to see if it's Frida's fault or ours.
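
For context on that note: on amd64, Go's register-based calling convention (Go 1.17+) reserves R14 for the current goroutine's g pointer, so foreign code that returns with R14 clobbered breaks the runtime's next g check. Below is a minimal sketch of that invariant, with hypothetical file and function names; it is illustrative only, not mirrord's or Frida's actual code:

```go
// main.go: reads the g pointer straight out of R14 via a tiny assembly helper.
package main

import "fmt"

// currentG is implemented in getg_amd64.s; it copies R14 to the return slot.
func currentG() uintptr

func main() {
    // A non-zero value is the invariant any trampoline must preserve:
    // if R14 were clobbered, the runtime's g checks would see garbage.
    fmt.Printf("g = %#x\n", currentG())
}
```

```
// getg_amd64.s (amd64 only)
#include "textflag.h"

// func currentG() uintptr
TEXT ·currentG(SB), NOSPLIT, $0-8
    MOVQ R14, ret+0(FP) // R14 holds the current g under the amd64 register ABI
    RET
```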

aviramha commented 1 year ago

Fixed with #948