Open mtsokol opened 6 months ago
Paraphrasing my comments from Slack -- We should go with the first option, partially because that's the only backwards-compatible option. The concerns about a new kernel for each background value can be handled via a hybrid approach: We compile a new kernel iff something is an identity or an absorbing value of a registered function; otherwise we resort to treating the background value as a runtime value rather than compile-time constant.
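The hybrid policy could look something like the sketch below. The `SPECIAL_VALUES` registry and `should_specialize` helper are illustrative names, not actual finch-tensor API; the point is only the decision rule: specialize iff the background value is an identity or absorbing value of the registered function.

```python
# Identity / absorbing values per registered function (illustrative,
# not the actual finch-tensor registry).
SPECIAL_VALUES = {
    "add": {0},                  # x + 0 == x
    "mul": {0, 1},               # x * 1 == x; x * 0 == 0
    "max": {float("-inf")},      # max(x, -inf) == x
    "min": {float("inf")},       # min(x, inf) == x
}

def should_specialize(op_name, fill_value):
    """Compile a dedicated kernel only when the background (fill) value
    is an identity or absorbing value of the operation; otherwise the
    fill value is passed to one generic kernel at runtime."""
    return fill_value in SPECIAL_VALUES.get(op_name, set())

# A fill value of 0 under addition gets its own kernel...
assert should_specialize("add", 0)
# ...but an arbitrary fill value like 7 stays a runtime argument.
assert not should_specialize("add", 7)
```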
Oh, that's interesting! So we would compile for 1, 0, inf, -inf. This seems like a very doable compromise, especially if we can add other "special" values to the compiler. We may want to do this in JuliaLand as well when we do broadcasts. `A .+ 1` has the same issue in Julia.
We should also figure out syntax to overload/avoid this behavior. If the default is to promote some values to constants, what is the way that we declare constant and what is the way we declare variable?
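One possible shape for that syntax: explicit wrappers that override the default promotion policy. `Const` and `Var` below are hypothetical names, not existing finch-tensor API; this is just a sketch of what the user-facing declaration could look like.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Const:
    """Bake the value into the compiled kernel as a compile-time constant."""
    value: float

@dataclass(frozen=True)
class Var:
    """Pass the value as a runtime argument, reusing one generic kernel."""
    value: float

def classify(x):
    """Default policy: promote only 'special' values to constants,
    unless the user overrides with an explicit Const/Var wrapper."""
    if isinstance(x, (Const, Var)):
        return x  # explicit declaration wins
    special = {0, 1, float("inf"), float("-inf")}
    return Const(x) if x in special else Var(x)

assert classify(1) == Const(1)          # special value: constant by default
assert classify(7) == Var(7)            # generic value: runtime argument
assert classify(Const(7)) == Const(7)   # explicit override is respected
```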
I just wanted to give a heads-up: `Tensor(Element(0.0))` is not very optimized for scalars. We should benchmark it to see what the overhead is before making changes. This is in response to https://github.com/willow-ahrens/finch-tensor/pull/62
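A quick micro-benchmark along these lines could quantify the wrapping overhead. The `Tensor`/`Element` classes here are pure-Python stand-ins so the snippet is self-contained; the real measurement would compare a bare scalar operation against `finch.Tensor(finch.Element(...))`.

```python
import timeit

# Stand-in classes to make the construction cost visible;
# not the actual finch-tensor implementations.
class Element:
    def __init__(self, fill):
        self.fill = fill

class Tensor:
    def __init__(self, lvl):
        self.lvl = lvl

n = 100_000
# Baseline: a bare float addition.
bare = timeit.timeit("1.0 + 2.0", number=n)
# Same addition, but the result is wrapped on every call.
wrapped = timeit.timeit(
    "Tensor(Element(1.0 + 2.0))", globals=globals(), number=n
)
print(f"bare: {bare:.4f}s, wrapped: {wrapped:.4f}s")
```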
A thought: let's produce a "scalar" constructor which dispatches to either a `StaticScalar` or a `DynamicScalar` type, and also expose an interface to target those two directly.
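A minimal sketch of such a constructor, assuming `StaticScalar`/`DynamicScalar` stand-ins (the real Finch types would carry more structure) and a `static=` keyword as the hypothetical direct-targeting interface:

```python
class StaticScalar:
    """Value known at compile time; kernels may specialize on it."""
    def __init__(self, value):
        self.value = value

class DynamicScalar:
    """Value supplied at runtime; one generic kernel serves all values."""
    def __init__(self, value):
        self.value = value

def scalar(value, static=None):
    """Dispatch to StaticScalar for 'special' values by default, while
    letting callers target either type directly via `static=`."""
    if static is None:
        static = value in (0, 1, float("inf"), float("-inf"))
    return StaticScalar(value) if static else DynamicScalar(value)

assert isinstance(scalar(0), StaticScalar)          # special: static
assert isinstance(scalar(2.5), DynamicScalar)       # generic: dynamic
assert isinstance(scalar(2.5, static=True), StaticScalar)  # explicit
```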
Hi @willow-ahrens @hameerabbasi,
This issue is meant to discuss and decide the approach we would like to take in terms of handling scalars.
From the existing discussion, when calling a function on a tensor and a scalar (`Tensor(...) + 1`), the scalar could be wrapped in a `Tensor(1)` and interpreted as: a `1`, a `0`-dimensional tensor filled with `1`, or a `Tensor(Dense(Element([1])))`.