ignoramous opened 1 month ago
cc: @manninglucas
Looks like a use-after-free to me. This could be coming from inside netstack or from your application, which uses reference-counted parts of the netstack API. Looking through the netstack code, nothing sticks out to me. UDP isn't too complex, and that part of the API is fuzzed by syzkaller, so I would be surprised if there was something obviously broken there. Do you know which net protocol this connection was using (ipv4 vs ipv6)?
Also, your "steps to reproduce" doesn't give me enough information on how to build and reproduce this issue myself. Adding some detail there will make it easier for me to help you debug this issue.
Also, how frequently is this crash happening? Is it occasional or every time you try to run this code?
> This could be coming from your application which uses reference counted parts of the netstack API.
Quite possible. The two places we do use the netstack's refcounting APIs come from code adapted from netstack's `fdbased/endpoint.go` (loc1) and `fdbased/processors.go` (loc1, loc2, loc3) for our repo (mostly to support swapping `fd`s to avoid creating a new `LinkEndpoint`).
> Do you know which net protocol this connection was using (ipv4 vs ipv6)

UDP over IPv4 (it looks like a QUIC connection requested by uid 10268, which is Instagram):

```
udp: b85e6888d0a12280 (proxy? Exit) 192.168.0.144:40058 -> 157.240.23.128:443 for uid 10268
```
"steps to reproduce"
Apologies. What our Android app does:
gonet
) UDP (ref) & TCP handlers (ref).gonet.TCPConn
/ gonet.UDPConn
to an actual egress (remote) connection to the same destination (upload: io.Copy(egressConn, gonetConn)
and download: io.Copy(gonetConn, egressConn)
) (ref).The nil ptr is hit by gonet.UDPConn.Read() called by (upload) io.Copy
.
> Also, how frequently is this crash happening? Is it occasional or every time you try to run this code?
Rare. Around once a week (uptime).
Like you point out, this crash could totally be due to our app's incorrect use of ref-counting APIs.
Thanks for the extra info. I wasn't able to determine the root cause after looking through both the gVisor UDP code and your code for some time. I will be AFK until next week, but will look at it again when I get back. In the meantime, @kevinGC could you take a look and see if you can find any ref counting issue here?
Took a look and, while I don't have anything definitive, I wonder whether it could be related to the chain of goroutines that starts in `firestack/intra/netstack/udp.go:udpForwarder`. For example, the function passed to `NewForwarder`:

- calls `h.Proxy` or `h.ProxyMux`
- `Proxy` spawns another goroutine via `core.Go(..., forward, ...)`
- `forward` spawns another goroutine to call `upload`

I think that the first goroutine spawned here now has a pointer to a `PacketBuffer` that it never `IncRef`s or `DecRef`s, although I'm surprised this doesn't cause a memory leak; when a `udp.ForwarderRequest` is created in `udp.Forwarder.HandlePacket`, it calls `pkt.IncRef` and AFAICT that's never undone via `DecRef`.

Perhaps try `DecRef`ing the packet after calling `CreateEndpoint` -- it would at least test whether memory is leaking.
Thank you.
> Perhaps try DecRefing the packet after calling CreateEndpoint -- it would at least test whether memory is leaking.
Looks like `IncRef` was introduced in ~Feb 2023 to resolve https://github.com/google/gvisor/issues/8448#issuecomment-1411148407. I couldn't find a way to `DecRef` `ForwarderRequest`'s `PacketBuffer` since it isn't exported.

iirc, none of the other FOSS projects (ex1, ex2) we looked at (at the time) `DecRef`'d in TCP/UDP Forwarders. gVisor's tests don't either:
> when a udp.ForwarderRequest is created in udp.Forwarder.HandlePacket, it calls pkt.IncRef and AFAICT that's never undone via DecRef.
Could `DecRef` be `defer`'d in `ForwarderRequest.CreateEndpoint` instead? (`udp.Forwarder` provides no way to process the handled `PacketBuffer` other than `ForwarderRequest.CreateEndpoint` anyway.)
Looking now, I may have been wrong in #8458. It should probably be up to the caller of `NewForwarder` to `IncRef` the packet. I think I looked at the TCP forwarder and naively copied it, but the TCP forwarder has to `IncRef` because it starts a new goroutine. That's not true for UDP.
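Why spawning a goroutine forces the extra reference: the caller may drop its own reference (and free the packet) before the goroutine runs, so the handler must take one for the goroutine's lifetime. A toy sketch (the `pkt` type and `handleAsync` are illustrative, not gVisor's code):

```go
package main

import (
	"fmt"
	"sync"
)

// pkt is a toy refcounted packet.
type pkt struct {
	mu   sync.Mutex
	refs int
	live bool
}

func (p *pkt) IncRef() { p.mu.Lock(); p.refs++; p.mu.Unlock() }
func (p *pkt) DecRef() {
	p.mu.Lock()
	p.refs--
	if p.refs == 0 {
		p.live = false // freed
	}
	p.mu.Unlock()
}

// handleAsync takes its own reference before handing the packet to a
// goroutine, the way the TCP forwarder must. Without the IncRef, the
// caller's DecRef below could free the packet under the goroutine.
func handleAsync(p *pkt, done chan<- bool) {
	p.IncRef()
	go func() {
		defer p.DecRef()
		done <- p.live // still true: our reference keeps it alive
	}()
}

func main() {
	p := &pkt{refs: 1, live: true}
	done := make(chan bool)
	handleAsync(p, done)
	p.DecRef() // caller drops its reference immediately
	fmt.Println(<-done) // prints "true"
}
```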
In any case, a leak isn't causing the panic.
Description
`buffer.View.chunk` is nil'd in `buffer.View.Release()`: https://github.com/google/gvisor/blob/8db16e88598170344d74c4aca32b90e58b9f7c43/pkg/buffer/view.go#L108

And then `transport/udp/endpoint.go` possibly `DecRef()`s an already released buffer: https://github.com/google/gvisor/blob/8db16e88598170344d74c4aca32b90e58b9f7c43/pkg/tcpip/transport/udp/endpoint.go#L233

One possibility is that the buffer was racily released by `transport.udp.Close()`; however, `rcvMu` is held there, and so this edge case is unlikely.

This crash was reported by our Android app (cgo): https://github.com/celzero/firestack/issues/74
I don't understand this code well enough to propose a fix (I'm at a loss as to how pkg `ref_template` even works; for ex, where/how is `chunkRefs` defined or init'd?): https://github.com/google/gvisor/blob/8db16e88598170344d74c4aca32b90e58b9f7c43/pkg/buffer/chunk.go#L77
Steps to reproduce
with `io.Copy(dst, gonet.UDPConn)`
runsc version
docker version (if using docker)
No response
uname
No response
kubectl (if using Kubernetes)
No response
repo state (if built from source)
No response
runsc debug logs (if available)
No response