Closed: arkanoid87 closed this issue 1 year ago
In theory it wouldn't be difficult, but I'm not sure about the best way to make it a user option. Generally, while there are currently some explicit `float` conversions thrown into the code, those could all become `T` conversions, where `T` is some globally defined type that is the internal data representation. The important one is the initial definition of `Unit = distinct float` here: https://github.com/SciNim/Unchained/blob/master/src/unchained/core_types.nim#L7 (conversion could also be done via `Unit.distinctBase`, now that I think about it).
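As a rough sketch of the `distinctBase` idea (the type and values here are illustrative, not Unchained's actual internals):

```nim
import std/typetraits  # provides `distinctBase`

type Unit = distinct float  # mirrors the definition in core_types.nim

let u = Unit(3.5)
# Convert back to the underlying representation without naming `float`
# explicitly, so this code keeps working if the base type later changes:
let asBase = distinctBase(Unit)(u)
echo asBase  # 3.5
```

The advantage over a hard-coded `float(u)` is that `distinctBase(Unit)` follows whatever base type `Unit` is defined with.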
By default a user-definable `T` would be `float`. The question is how to change that without having to hack around in the code. Try to read some file at CT that may contain a type? I don't think there's a way to override a type via a `define`. Only int, str and bool are possible via `-d:foo=bar`. One alternative would be:
```nim
const Bytes {.intdefine.} = 8
type
  when Bytes == 8:
    Unit = distinct float64
  elif Bytes == 4:
    ...
```
That would be relatively neat.
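Filled out, that sketch might look like the following minimal, self-contained version. Note this is only an illustration: `FloatBytes` is the define name that ends up being used later in this thread, and the `{.error.}` fallback branch is my addition, not part of the proposal above.

```nim
# Select the internal float width at compile time, e.g.:
#   nim c -d:FloatBytes=4 myprog.nim
const FloatBytes {.intdefine.} = 8  # defaults to 8 bytes (float64)

when FloatBytes == 8:
  type Unit = distinct float64
elif FloatBytes == 4:
  type Unit = distinct float32
else:
  {.error: "FloatBytes must be 4 or 8".}

echo sizeof(Unit)  # 8 by default, 4 with -d:FloatBytes=4
```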
Regarding `int`: Integers are not really supported in any sense. You may just "think" they are supported in the sense that you can use integer literals in some contexts. But that's just the Nim compiler auto-converting literals.
edit: And integers more generally are not really sensible when dealing with units, I'd say. Yes, for some operations they are lossless, but once you include division it breaks. Given that there are units attached, I wouldn't want to allow such lossy conversions (even if they are perfectly well defined).
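A two-line illustration of the lossiness argument (plain Nim, no units involved):

```nim
# Integer division silently truncates, so a quantity like 3 m / 2 s could
# not be represented exactly by an integer-backed unit type:
doAssert 3 div 2 == 1      # truncated: the 0.5 m/s is lost
doAssert 3.0 / 2.0 == 1.5  # float division keeps the fractional part
```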
The problem is that I am dealing with quite large Tensors, but currently I have to get rid of unchained to make those fit in RAM, as I'm locked to `Tensor[float64]` when using units.
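Back-of-the-envelope numbers for why this matters (the element count is illustrative):

```nim
# A tensor's payload is roughly element-count * element-size, so halving
# the float width halves the dominant memory cost:
const N = 100_000_000        # hypothetical tensor size
echo N * sizeof(float64)     # 800000000 bytes, ~0.8 GB
echo N * sizeof(float32)     # 400000000 bytes, ~0.4 GB
```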
a compile time flag defining float size in bytes defaulting to 8 would be really neat imho
forget about ints, I'm aware that there are no discrete values here
> a compile time flag defining float size in bytes defaulting to 8 would be really neat imho
If you're fine with that solution based on byte size of the float type for now (which currently means only supporting `float32` and `float64`), I can implement that tomorrow. At least it cuts your RAM usage roughly in half (depending on how much other stuff is non-float).
I'd be super happy to immediately test this feature in my project! It would actually save me a lot of time, and memory. Cutting the memory usage of my inputs in half would basically mean that I can almost double the precision of my simulation without additional memory usage (that's actually the limiter)
See the comment in the PR. Hope this works for you.
Super! I'm downloading it right now, I'll provide some feedback asap
Even without defining `FloatBytes`/`UnitCompareEpsilon`, I'm getting compile errors on previously working code with v0.3.2:
```nim
# switch("d", "FloatBytes=4")
# switch("d", "UnitCompareEpsilon=8")
```

```
.nimble/pkgs/unchained-0.3.2/unchained/units.nim(430, 14) Error: type mismatch: got <Minute>
but expected one of:
proc min(x, y: float32): float32
  first type mismatch at position: 1
  required type for x: float32
  but expression 'x.tau_r' is of type: Minute
proc min(x, y: float64): float64
  first type mismatch at position: 1
  required type for x: float64
  but expression 'x.tau_r' is of type: Minute
proc min(x, y: int): int
  first type mismatch at position: 1
  required type for x: int
  but expression 'x.tau_r' is of type: Minute
proc min(x, y: int16): int16
  first type mismatch at position: 1
  required type for x: int16
  but expression 'x.tau_r' is of type: Minute
proc min(x, y: int32): int32
  first type mismatch at position: 1
  required type for x: int32
  but expression 'x.tau_r' is of type: Minute
proc min(x, y: int64): int64
  first type mismatch at position: 1
  required type for x: int64
  but expression 'x.tau_r' is of type: Minute
proc min(x, y: int8): int8
  first type mismatch at position: 1
  required type for x: int8
  but expression 'x.tau_r' is of type: Minute
proc min[T: not SomeFloat](x, y: T): T
  first type mismatch at position: 2
  missing parameter: y
proc min[T](arg: Tensor[T]): T
  first type mismatch at position: 1
  required type for arg: Tensor[min.T]
  but expression 'x.tau_r' is of type: Minute
proc min[T](arg: Tensor[T]; axis: int): Tensor[T]
  first type mismatch at position: 1
  required type for arg: Tensor[min.T]
  but expression 'x.tau_r' is of type: Minute
proc min[T](x: openArray[T]): T
  first type mismatch at position: 1
  required type for x: openArray[T]
  but expression 'x.tau_r' is of type: Minute
expression: min(x.tau_r)
```
Downgrading to v0.3.1 (you forgot to tag this version) or v0.3.0 fixes it.
I've faced the `min` vs `Minute` namespace fight in the past, but I made it work for previous versions. I'll dig deeper to find out what is going on and possibly reproduce a minimal example.
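For context, a stripped-down sketch of this kind of clash (the type here is illustrative, not Unchained's actual definition): none of `system.min`'s overloads accept a distinct float type, so passing one fails overload resolution until a matching overload is in scope.

```nim
type Minute = distinct float  # stand-in for Unchained's Minute unit

let a = Minute(5.0)
let b = Minute(3.0)

# echo min(a, b)  # would fail: no `min` overload accepts `Minute`

# Providing an overload for the distinct type resolves the call:
proc min(x, y: Minute): Minute =
  if x.float < y.float: x else: y

echo min(a, b).float  # 3.0
```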
> Downgrading to v0.3.1 (you forgot to tag this version) or v0.3.0 fixes it.
Oh, sorry. Just forgot to push that tag. Just pushed it. Can you check whether the issue was or wasn't present on v0.3.1?
edit: It's likely going to be a regression of v0.3.1, due to it being a pretty big version with large changes.
I confirm that the issue is present in v0.3.1 too. Would git bisect be helpful here?
Not a git bisect, but a small example that shows the issue (unless this happens with any usage of `min` for you?):
```nim
import unchained
echo 60.Second.to(Minute)
```
Fixed in v0.3.3.
Afaik currently only int/float64 units are handled. How difficult would it be to introduce 32/16-bit variants?