clash-lang / clash-compiler

Haskell to VHDL/Verilog/SystemVerilog compiler
https://clash-lang.org/

SaturatingNum Bit #2827

Open gergoerdi opened 2 weeks ago

gergoerdi commented 2 weeks ago

I have a use case where Overflowing Bit, and thus a SaturatingNum Bit instance, would be useful. I see no reason Bit couldn't be made into SaturatingNum. Sketch(y) implementation:

instance SaturatingNum Bit where
    satAdd = \case
        SatWrap -> addOr 0
        SatBound -> addOr maxBound
        SatZero ->  addOr 0
        SatSymmetric -> addOr maxBound
        SatError -> addOr $ errorX "satAdd"
      where
        addOr ext = \x y -> if x == 1 && y == 1 then ext else x + y

    satSub = \case
        SatWrap -> subOr 1
        SatBound -> subOr minBound
        SatZero ->  subOr 0
        SatSymmetric -> subOr maxBound
        SatError -> subOr $ errorX "satSub"
      where
        subOr ext = \x y -> if x == 0 && y == 1 then ext else x - y

    satMul = \case
        _ -> (*)
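
For concreteness, the behaviour this sketch implies at the branch points; a hypothetical usage example only (the sketchExamples name is made up), assuming the instance above is in scope and that minBound = 0 and maxBound = 1 for Bit, as the sketch already assumes:

import Clash.Prelude

-- Illustrative only: expected results if the sketch above is in scope.
sketchExamples :: [Bit]
sketchExamples =
  [ satAdd SatWrap  1 1  -- 0: 1 + 1 wraps around
  , satAdd SatBound 1 1  -- 1: clamps at maxBound
  , satAdd SatZero  1 1  -- 0: overflow becomes zero
  , satSub SatWrap  0 1  -- 1: 0 - 1 wraps around
  , satSub SatBound 0 1  -- 0: clamps at minBound
  ]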
kleinreact commented 2 weeks ago

As also mentioned on #clash, another alternative might be to use the already existing instance for BitVector 1, which probably results in more maintainable code, e.g.,

instance SaturatingNum Bit where
  satAdd mode a b = bitCoerce @(BitVector 1) @Bit $
    satAdd mode
      (bitCoerce @Bit @(BitVector 1) a)
      (bitCoerce @Bit @(BitVector 1) b)

  satSub mode a b = bitCoerce @(BitVector 1) @Bit $
    satSub mode
      (bitCoerce @Bit @(BitVector 1) a)
      (bitCoerce @Bit @(BitVector 1) b)

  satMul mode a b = bitCoerce @(BitVector 1) @Bit $
    satMul mode
      (bitCoerce @Bit @(BitVector 1) a)
      (bitCoerce @Bit @(BitVector 1) b)
gergoerdi commented 2 weeks ago

  satAdd mode `on` bitCoerce @Bit @(BitVector 1)

should work with Data.Function.on, for that sweet sweet code golfing:)

gergoerdi commented 2 weeks ago

How about

    satAdd mode = fmap (fmap unpack) (satAdd mode `on` pack)

This should work without any explicit type arguments on pack / unpack, since one of their sides is known to be BitVector, and the other is Bit from the instance head and the method types.
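
For anyone puzzling over the double fmap: it is just the Functor instance of (->) r used twice to post-compose unpack onto a two-argument function. A type-level walkthrough as a sketch; step1/step2 are made-up names, and the mode is fixed to SatWrap purely for illustration:

import Clash.Prelude
import Data.Function (on)

-- satAdd at BitVector 1, applied to both arguments after packing them.
step1 :: Bit -> Bit -> BitVector 1
step1 = satAdd SatWrap `on` pack

-- fmap at the (->) Bit functor post-composes unpack, once per remaining argument.
step2 :: Bit -> Bit -> Bit
step2 = fmap (fmap unpack) step1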

gergoerdi commented 2 weeks ago

Aaaaaaaaaaand we have a winner:

import Data.Composition ((.:)) -- from `composition' package
import Data.Function (on)

instance SaturatingNum Bit where
    satAdd mode = unpack .: satAdd mode `on` pack
    satSub mode = unpack .: satSub mode `on` pack
    satMul mode = unpack .: satMul mode `on` pack
DigitalBrains1 commented 2 weeks ago

I'm not a big fan of using the pack function here; it is expensive in simulation. It also turns an XException into an undefined bit, but I have no problem with that: in my opinion, a Bit shouldn't carry an XException value, and if it does, turning it into what I think is the correct encoding (an undefined bit) is fine.

I think the code from the first post is fine (although I'd write satMul differently myself).

DigitalBrains1 commented 2 weeks ago

Ah, but the code in the first post turns an undefined bit into an XException X-D

So yeah, I'm in favour of code that doesn't use pack and also doesn't produce XExceptions. That'd be my ideal, but maybe the route through pack is forgivable. I'd rather not, though.

[edit] An alternative might be the Clash.Sized.Internal.BitVector.pack# function. It is cheap, and also turns a Bit into a BitVector. It'll leave any XException in, so it doesn't make the situation worse in that regard. [/edit]

[edit 2] TL;DR:

import Clash.Sized.Internal.BitVector (pack#)
import Data.Composition ((.:)) -- from `composition' package
import Data.Function (on)

instance SaturatingNum Bit where
    satAdd mode = unpack .: satAdd mode `on` pack#
    satSub mode = unpack .: satSub mode `on` pack#
    satMul mode = unpack .: satMul mode `on` pack#

Personally I'm more inclined to write it without (.:), but I've got nothing against other people doing it :-). [/edit 2]
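
For comparison, spelled out pointwise without (.:), that could look something like this; a sketch only, using the same pack#/unpack route as the TL;DR above:

import Clash.Prelude
import Clash.Sized.Internal.BitVector (pack#)

-- Same route as the TL;DR above, with the composition written out pointwise.
instance SaturatingNum Bit where
    satAdd mode x y = unpack (satAdd mode (pack# x) (pack# y))
    satSub mode x y = unpack (satSub mode (pack# x) (pack# y))
    satMul mode x y = unpack (satMul mode (pack# x) (pack# y))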

DigitalBrains1 commented 2 weeks ago

The implementation for satMul is nicely symmetric with satAdd and satSub, but I still think satMul _ = (*) makes more sense. I'm not sure if the mode argument should be forced to WHNF.
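
For reference, the two readings of that question side by side; a sketch with made-up names, and whether forcing the mode is desirable is exactly what is undecided here:

{-# LANGUAGE BangPatterns #-}

import Clash.Prelude

-- Lazy in the mode: an XException stored in the mode argument is never forced.
satMulLazy :: SaturationMode -> Bit -> Bit -> Bit
satMulLazy _ = (*)

-- Forced to WHNF: an XException in the mode argument propagates to the caller.
satMulStrict :: SaturationMode -> Bit -> Bit -> Bit
satMulStrict !_ = (*)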

Also, multiplication for Bit is actually smarter than all of (*), mul and satMul for BitVector. The BitVector ones don't notice when the least significant bits are known to be zero despite the presence of undefined bits (there's even more that could be determined, but that one is easy).

I think we should also discuss (elsewhere) why those BitVector functions throw XException instead of setting undefined bits.

>>> xbv = $(bLit ".")
>>> xb = unpack xbv :: Bit
>>> xbv
0b.
>>> xb
.
>>> 1 * xb
.
>>> 0 * xb
0
>>> xb * 0
0
>>> 0 * xbv
*** Exception: X: Clash.Sized.BitVector.* called with (partially) undefined arguments: 0b0 * 0b.

[edit]

Known least significant bits: I think, given the following definitions:

a = BV aMask aVal
b = BV bMask bVal
c = BV cMask cVal
c = a * b

az = countTrailingZeros (aMask .|. aVal)
bz = countTrailingZeros (bMask .|. bVal)
cz = countTrailingZeros (cMask .|. cVal)

then the following holds:

cz = az + bz

(thus leading to a trailer of known 0 bits)

[edit] I corrected the math; I was thinking of trailing zeroes in both mask and value, not just mask. Somewhere lost track of my own reasoning. [/edit]

[/edit]
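
A small sketch of that bookkeeping outside of Clash's internals; the names are mine, and the (mask, value) pairs of Naturals only stand in for BitVector's internal BV mask value representation:

import Data.Bits (testBit, (.|.))
import Numeric.Natural (Natural)

-- Trailing bits of a width-wide word (given as a Natural) that are 0, capped
-- at the width.
trailingZeros :: Int -> Natural -> Int
trailingZeros width x =
  length (takeWhile (\i -> not (testBit x i)) [0 .. width - 1])

-- A bit is known-0 when both its mask bit and value bit are 0, so the known
-- trailing zeros of an argument are those of (mask .|. value). Per the
-- reasoning above, the product then ends in az + bz known-0 bits, capped at
-- the result width.
knownProductTrailingZeros
  :: Int -> (Natural, Natural) -> (Natural, Natural) -> Int
knownProductTrailingZeros w (aMask, aVal) (bMask, bVal) =
  min w (trailingZeros w (aMask .|. aVal) + trailingZeros w (bMask .|. bVal))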