A somewhat interesting problem: we were running a service that takes in encoded transactions as user input, then deserializes them to do some things. We then saw a bunch of out of memory errors, which was weird because our application never got close to the limit we had set for it.
After a bit of tracing, we tracked down the issue to bad user input. Specifically, the current implementation of the transaction decoder (and possibly others) doesn't do super strong validation of the incoming data, and somewhat naively allocates slices based on certain byte offsets. For example, this was our problem line: https://github.com/gagliardetto/solana-go/blob/main/message.go#L452
Because of the bad input, Go tried to allocate 0x163e65afc58 bytes (~1423 GiB) worth of slices. I think we need checks to ensure the transaction truly requires that much allocation (which isn't valid in Solana anyway).
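As a sketch of what such a check could look like (the function name and shape here are hypothetical, not the actual solana-go decoder API): before allocating, validate the decoded length against the bytes actually remaining in the input, since a valid payload can never declare more elements than the buffer can back.

```go
package main

import (
	"errors"
	"fmt"
)

// decodeSlice is a hypothetical bounds-checked read. Instead of trusting a
// length decoded from untrusted input, it verifies the length against the
// bytes remaining in the buffer before allocating anything.
func decodeSlice(buf []byte, offset, n int) ([]byte, error) {
	if n < 0 || offset < 0 || offset > len(buf) {
		return nil, errors.New("invalid offset or length")
	}
	// The key check: reject lengths larger than the remaining input,
	// so a corrupt length field cannot trigger a huge allocation.
	if n > len(buf)-offset {
		return nil, fmt.Errorf("declared length %d exceeds remaining %d bytes", n, len(buf)-offset)
	}
	out := make([]byte, n)
	copy(out, buf[offset:offset+n])
	return out, nil
}

func main() {
	buf := []byte{1, 2, 3, 4}
	// A bad length like 0x163e65afc58 is rejected up front instead of
	// attempting a ~1.4 TiB allocation.
	if _, err := decodeSlice(buf, 0, 0x163e65afc58); err != nil {
		fmt.Println("rejected:", err)
	}
}
```

An alternative (used by some decoders) is to cap allocations at a protocol-level maximum, e.g. Solana's packet size limit, rather than the remaining buffer length; either bound would have caught this input.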