mr0re1 opened 1 year ago
What do you think about waiting until we have a unified LWE/RLWE dialect? I agree that lowering to arith is weird, but if you want to go through poly now, you could lower to a combination of to_tensor and from_tensor, using tensor/math/arith ops for the rescaling and rounding.
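For illustration, a minimal sketch of what that round-trip could look like; the `poly.to_tensor`/`poly.from_tensor` names, the `#ring` attribute, and the i32/1024 shapes are assumptions, not the actual dialect syntax:

```mlir
// Sketch only: the poly op names and types are assumptions, not HEIR's actual API.
// Rescale each coefficient by q'/q and round down via a tensor round-trip.
%coeffs = poly.to_tensor %p : !poly.poly<#ring> -> tensor<1024xi32>
%as_f64 = arith.sitofp %coeffs : tensor<1024xi32> to tensor<1024xf64>
%scale  = arith.constant dense<0.5> : tensor<1024xf64>  // placeholder for q'/q
%scaled = arith.mulf %as_f64, %scale : tensor<1024xf64>
%down   = math.floor %scaled : tensor<1024xf64>         // "rounded down"
%as_i32 = arith.fptosi %down : tensor<1024xf64> to tensor<1024xi32>
%result = poly.from_tensor %as_i32 : tensor<1024xi32> -> !poly.poly<#ring>
```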
> you could lower to a combination of to_tensor and from_tensor, using tensor/math/arith ops for the rescaling and rounding.

That's what I'm currently doing, so the output is still in `poly`. My concern was about going a level too deep during lowering.
> What do you think about waiting until we have a unified LWE/RLWE dialect?
I think it will pose the same problem. I will proceed with the crude lowering approach and send out a PR to gather concrete feedback.
I'm having difficulty with lowering `ModulusSwitch`. I was trying to lower directly to `arith`, but faced an avalanche of complications (casts, signedness, overflows). It also feels idiomatically incorrect to lower straight to `arith`, bypassing `poly`.
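To make that concrete, here is roughly what the scalar core of round(c * q'/q) looks like in pure `arith` (the moduli values are made up; the real lowering additionally needs the tensor plumbing, modular reduction, and a rounding mode other than truncation):

```mlir
// Sketch of one coefficient's rescale in pure arith; the widening and
// signedness choices below are exactly the boilerplate that piles up.
%qp    = arith.constant 536870912 : i64   // hypothetical target modulus q'
%q     = arith.constant 1073741824 : i64  // hypothetical source modulus q
%wide  = arith.extsi %c : i32 to i64      // widen so the product cannot overflow
%prod  = arith.muli %wide, %qp : i64
%quot  = arith.divsi %prod, %q : i64      // truncates toward zero, not floor
%res   = arith.trunci %quot : i64 to i32
```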
@asraa, @j2kun, would it be reasonable to have a `Poly_ScaleOp` that takes a `poly` argument and a `float` (either argument or attribute?) and returns the element-wise multiplied `poly` (rounded down)?
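Something like the following, where the op name, the assembly format, and whether the scale is an attribute or an operand are all open (the syntax is hypothetical):

```mlir
// Hypothetical syntax for the proposed op.
// Attribute form:
%out1 = poly.scale %p { scale = 0.5 : f64 } : !poly.poly<#ring>
// Operand form:
%s    = arith.constant 0.5 : f64
%out2 = poly.scale %p, %s : (!poly.poly<#ring>, f64) -> !poly.poly<#ring>
```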