xtensor-stack / xtensor

C++ tensors with broadcasting and lazy computing
BSD 3-Clause "New" or "Revised" License

``xt::lerp``-like function #2684

Open DNKpp opened 1 year ago

DNKpp commented 1 year ago

Hey there, I've recently started using xtensor in one of my personal projects and was a bit surprised that I was not able to find a linear interpolation function. xt::interp does exist, but it doesn't seem to be the right fit (or I simply do not understand how to use it correctly). Am I missing something here?

tdegeus commented 1 year ago

xt::interp does exactly what NumPy's interp does: https://numpy.org/doc/stable/reference/generated/numpy.interp.html#numpy.interp. If that is not what you need, could you give some details on what you are missing?
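For reference, a quick sketch of how it is used (assuming the three-argument overload from xtensor/xmath.hpp, which I believe follows NumPy's (x, xp, fp) argument order):

```cpp
#include <iostream>

#include <xtensor/xarray.hpp>
#include <xtensor/xmath.hpp>
#include <xtensor/xio.hpp>

int main()
{
    // Sample points (xp) and their values (fp), as in numpy.interp.
    xt::xarray<double> xp = {0.0, 1.0, 2.0};
    xt::xarray<double> fp = {0.0, 10.0, 20.0};

    // Coordinates at which to evaluate the piecewise-linear interpolant.
    xt::xarray<double> x = {0.5, 1.5};

    // Expected output: { 5., 15. }
    std::cout << xt::interp(x, xp, fp) << std::endl;
}
```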

DNKpp commented 1 year ago

Thanks for the reply and the link. I'm not very familiar with NumPy. After reading the NumPy docs, I'm pretty sure it's not what I'm looking for. I'm searching for a function like std::lerp, just for the xt types. For example xt::lerp(v1, v2, t) (with v1 and v2 being xt types and t a floating-point value).

tdegeus commented 1 year ago

~I don't know about extrapolation, but otherwise I think that lerp is fully mappable to interp. Could you present a minimal example of your use case?~ Ah, I guess I did not understand std::lerp ;). You could easily do the naive lerp from the example, but what you want is simply an interface to std::lerp. In principle, that should be no issue and easy to do. The only thing I don't get is how an interface for arrays would be intuitive, i.e. what would be the advantage over writing a loop yourself (which is what the xtensor function would do internally, I guess)?

DNKpp commented 1 year ago

Hey, thanks again for your reply. I'm not sure if I understand you correctly, so perhaps it's best to elaborate more precisely. One of the most common cases I know of is the Unity-style lerp, which simply inter- or extrapolates a vector defined by a start vector (s) and an end vector (e), using an additional scalar parameter (t): s + t * (e - s). This is indeed not very complicated to build on top of xarray or xtensor, but IMHO it would be a good fit for the math part of the library, as it is a very common use case. To be honest, my implementation quite surely leaves much room for potential improvements; for example, I totally left out the evaluation strategy parameter.
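Roughly, a sketch of the kind of helper I mean (not my actual implementation, just the naive formula written with xtensor expressions, so it stays lazy and broadcasts like the rest of the library):

```cpp
#include <iostream>

#include <xtensor/xarray.hpp>
#include <xtensor/xio.hpp>

// Naive lerp on xtensor expressions: s + t * (e - s).
// Returns a lazy expression; the evaluation strategy parameter is left out.
template <class E1, class E2, class T>
auto lerp(const E1& s, const E2& e, T t)
{
    return s + t * (e - s);
}

int main()
{
    xt::xarray<double> s = {0.0, 0.0, 0.0};
    xt::xarray<double> e = {1.0, 2.0, 4.0};

    // t = 0.5 interpolates halfway; t outside [0, 1] extrapolates.
    xt::xarray<double> half = lerp(s, e, 0.5);
    std::cout << half << std::endl;  // { 0.5, 1., 2. }
}
```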

Greetings

tdegeus commented 1 year ago

I see. I would be happy to support this. Would you mind proposing a PR?

DNKpp commented 1 year ago

Well, I'm not sure I can do it in a way that fits into the library, but I can have a look at it after my vacation. :)

tdegeus commented 1 year ago

Don't worry if you need help; give it your best try and we'll help you go all the way. (That being said, I don't think it will be an overly complicated implementation either, as you can just use std::lerp, potentially with vectorisation.)
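Just to illustrate (a rough sketch, not a proposal for the final interface), an eager element-wise wrapper over std::lerp could look roughly like this:

```cpp
#include <algorithm>
#include <cmath>  // std::lerp

#include <xtensor/xarray.hpp>

// Eager element-wise lerp: applies std::lerp to each pair of elements.
// Assumes v1 and v2 have the same shape; broadcasting is not handled here.
template <class T>
xt::xarray<T> lerp(const xt::xarray<T>& v1, const xt::xarray<T>& v2, T t)
{
    xt::xarray<T> out = v1;
    std::transform(v1.cbegin(), v1.cend(), v2.cbegin(), out.begin(),
                   [t](T a, T b) { return std::lerp(a, b, t); });
    return out;
}
```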

DNKpp commented 1 year ago

Which C++ version is xtensor targeting? I'm asking because std::lerp was introduced in C++20.

EDIT: The docs say C++14. Is that still the case?

tdegeus commented 1 year ago

Ah sorry, I did not catch that it was C++20. We are officially on C++14; however, C++17 seems to work perfectly fine, and we aim to bump to it officially soon. Indeed, we also want to support https://github.com/wjakob/nanobind. As for C++20, I don't think we can rely on that yet, since those of us using HPC systems are not always on the bleeding edge of compiler versions. I could live with a preprocessor guard enabling C++20 functions, though. However, we may need to deal with some blocking things that will surely pop up.
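For what it's worth, a rough sketch of what such a guard could look like, using the feature-test macro for std::lerp (__cpp_lib_interpolate); the name scalar_lerp is just a placeholder:

```cpp
#include <cmath>  // defines __cpp_lib_interpolate when std::lerp is available

// Placeholder helper: use std::lerp when the standard library provides it
// (C++20), otherwise fall back to the naive formula.
template <class T>
constexpr T scalar_lerp(T a, T b, T t) noexcept
{
#if defined(__cpp_lib_interpolate) && __cpp_lib_interpolate >= 201902L
    return std::lerp(a, b, t);
#else
    return a + t * (b - a);
#endif
}
```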