Closed xavidop closed 1 week ago
Is your remote pod restarting when this happens?
@aviramha no restarts in the remote pod
@trs can add more context if needed
I'm facing the same issue when I use steal with an HTTP proxy. Mirroring works fine.
Can you please elaborate? Did you check if the remote pod is restarting?
The remote pod has not restarted. Every time I use the "steal" mode with a socks5 proxy running to connect to the remote cluster where my actual pod is deployed, I see the following error on my local machine:
Proxy error, connectivity issue or a bug. You may report it to us on https://github.com/metalbear-co/mirrord/issues/new?assignees=&labels=bug&projects=&template=bug_report.yml connection closed /home/appuser/entryPoint.sh: line 40: 17 Killed java -Djava.security.egd=file:/dev/./urandom -jar /home/appuser/app.jar
Correcting myself, its a socks5 proxy.
Would you like to share screen to debug it? I'm available on our Discord
Sorry, I'm working on my organization's machine, so I don't think I can share my screen. I can provide logs though. Could you please let me know what kind of logs you would require to debug this?
Please use:
{
  "internal_proxy": {
    "log_level": "mirrord=trace",
    "log_destination": "/tmp/mirrord.log"
  }
}
then attach the log (with redacted sensitive info)
Please find it here: mirrord-logs.txt
Oh, you found a wonderful bug. Seems we're not handling cases where the remote cluster uses IPv6 well.
2024-09-30T10:02:38.653317Z TRACE handle_agent_message{Close("Background task `Stealer` failed with `IPTables failed with `code: 2, msg: iptables v1.8.9 (legacy): host/network `fc00:1000::fa24' not found\nTry `iptables -h' or 'iptables --help' for more information.\n``")}: mirrord_intproxy: return=Err(AgentFailed("Background task `Stealer` failed with `IPTables failed with `code: 2, msg: iptables v1.8.9 (legacy): host/network `fc00:1000::fa24' not found\nTry `iptables -h' or 'iptables --help' for more information.\n``"))
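The log above shows the classic iptables address-family split: legacy `iptables` only understands IPv4, so handing it an IPv6 host like `fc00:1000::fa24` fails with "host/network ... not found"; IPv6 rules have to go through `ip6tables` instead. A minimal sketch of dispatching on the address family (a hypothetical helper for illustration, not mirrord's actual agent code):

```python
import ipaddress

def iptables_binary_for(host: str) -> str:
    """Pick the iptables variant that understands the address family.

    Legacy iptables is IPv4-only; an IPv6 host such as fc00:1000::fa24
    must be passed to ip6tables, otherwise iptables rejects it with
    "host/network ... not found" (as seen in the trace above).
    """
    addr = ipaddress.ip_address(host)
    return "ip6tables" if addr.version == 6 else "iptables"

print(iptables_binary_for("fc00:1000::fa24"))  # → ip6tables
print(iptables_binary_for("10.0.0.7"))         # → iptables
```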
Please follow up on this issue here https://github.com/metalbear-co/mirrord/issues/2788
thanks for the quick help!
Bug Description
I get this error about 50 times a day (Proxy error, connectivity issue or a bug...connection closed), so I need to disconnect my env, reconnect, then re-mesh. It feels like a third of my day is spent managing my mesh. I think it's due to my wifi being weirdly unstable on my laptop, because if I hotspot using my phone I don't encounter it. But that's not sustainable for me long term. Any suggestions would be appreciated. Having a retry mechanism would be very, very helpful.
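Until a built-in retry mechanism exists, a crude stopgap is to wrap the launch command in a retry loop. A minimal sketch (hypothetical wrapper, not a mirrord feature; the mirrord command in the usage comment is only an example of what you would substitute):

```python
import subprocess
import time

def run_with_retry(cmd, max_attempts=5, delay=2.0):
    """Re-run cmd until it exits 0, as a stopgap for flaky connectivity.

    Returns True on the first successful exit, False after max_attempts
    failures. Waits `delay` seconds between attempts.
    """
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return True
        if attempt < max_attempts:
            time.sleep(delay)
    return False

# Example usage (substitute your real command):
#   run_with_retry(["mirrord", "exec", "--", "yarn", "start"])
```

This blindly restarts the whole session, so any in-flight requests are lost on each retry; it only saves the manual disconnect/reconnect cycle.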
Steps to Reproduce
Just use mirrord, then connect and disconnect from the internet.
Backtrace
No response
Relevant Logs
No response
Your operating system and version
macOS
Local process
we are running yarn commands
Local process version
No response
Additional Info
No response