Open PABannier opened 4 months ago
> Alternatively, I was considering implementing a `ggml_flip` operation (PyTorch's `flip`). This would allow us to implement a `pad_reflect` operation directly in 'user space' using flipping instead of adding a quite niche operation (`pad_reflect`) to ggml.
I don't have a good idea which option would be more useful, so whatever you think makes more sense - PR welcome
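To make the flip-based 'user space' route concrete, here is a small NumPy sketch (not ggml code; `flip` stands in for the proposed `ggml_flip`, and the slicing/concatenation would map to ggml view/concat operations):

```python
import numpy as np

def flip(x):
    # Stand-in for the proposed ggml_flip: reverse along the last axis.
    return x[..., ::-1]

def pad_reflect_1d(x, pad_left, pad_right):
    # Reflection padding composed from flip + slice + concat.
    # The edge element itself is not repeated, matching PyTorch's
    # reflection-pad semantics (requires pad < length).
    left = flip(x[..., 1:pad_left + 1])
    right = flip(x[..., -pad_right - 1:-1])
    return np.concatenate([left, x, right], axis=-1)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(pad_reflect_1d(x, 2, 2))  # [3. 2. 1. 2. 3. 4. 3. 2.]
```

The trade-off discussed above is visible here: with a `flip` primitive, reflection padding needs no dedicated operation, at the cost of a slightly longer graph (two flips, two views, one concat).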
I like `ggml_pad_reflect`, as I need this functionality for tortoise.cpp. @PABannier, can I have your permission to turn the code you have there into a PR for ggml, so I can get this upstreamed quickly? (I don't want to take credit for the PR without asking.)
I ask because I need a working version of this to proceed with tortoise, so I'm going to have to integrate it into at least my local copy of ggml anyway.
@balisujohn Please go on! :)
Hello,

I have implemented a custom operation `ggml_pad_reflect_1d` on my ggml fork. This is required for Encodec.cpp. Should I upstream this operation? I would write a `ggml_pad_reflect` supporting 1D and 2D input, as in PyTorch's `nn.ReflectionPad2d`.

Alternatively, I was considering implementing a `ggml_flip` operation (PyTorch's `flip`). This would allow us to implement a `pad_reflect` operation directly in 'user space' using flipping instead of adding a quite niche operation (`pad_reflect`) to ggml.

What are your thoughts? @slaren @ggerganov