Open leonardt opened 3 years ago
Another consideration is to have the m.GenVar logic be implicit in the m.combinational/m.sequential/etc. logic.
Also, I wonder if we could do something similar with parametrized types. Here's a simple example:
def make_CacheReq(x_len):
    class CacheReq(m.Product):
        addr = m.UInt[x_len]
        data = m.UInt[x_len]
        mask = m.UInt[x_len // 8]
    return CacheReq
We could add something like
x_len = m.TypeVar('x_len', int)

@m.type_generator
class CacheReq(m.Product):
    addr = m.UInt[x_len]
    data = m.UInt[x_len]
    mask = m.UInt[x_len // 8]
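For illustration, usage might then look something like this (the instantiation syntax is just a guess, since m.type_generator is only a proposal):

CacheReq32 = CacheReq(x_len=32)  # hypothetical syntax; analogous to make_CacheReq(32) above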
Could this be useful for / generalized to hwtypes?
If we generalize this to types rather than focusing on generators, then we may want to use the name TypeVar (à la mypy) instead.
This would provide a solution for https://github.com/phanrahan/magma/issues/838
Bunch of unorganized thoughts on this:
This is interesting and would be really cool. One question I have is whether it is critical for us to have the assert isinstance(alu16, ALU) relationship for @combinational. You could ask the same question for normal circuits as well, but I always felt that @combinational-wrapped functions are meant to be viewed as functions. That is, the fact that they get transformed into circuits is opaque to the user and they should see them only as functions. Meaning that the isinstance relationship wouldn't make sense, because functions are not really instances of anything (besides class <function>).
I'm trying to think of the analog of the Circuit <-> Generator relationship, but it seems like there isn't really a good analogy for functions. Maybe thinking about them as templated functions (à la C++ template <typename T> T foo()) is more apt, in which case this style is probably the best way to achieve that.
I think the machinery to allow symbolic arguments (i.e. GenVars) to type constructors would be (a) a good amount of work, but (b) very powerful beyond the use case for @combinational. Even our Generator2 classes could have this feature. It moves in the direction of having a defined meta-language for generators (cc @rdaly525). This could enable type-checking or property-checking at the generator level, or compiling generators directly as CoreIR generators. Seems like an interesting PL-y direction.
Related to (4): I think if we want to enable this feature just for @combinational (where it seems to be the most immediately useful), perhaps simplifying the syntax will make it a lot easier to implement. Basically, it seems like a lot of work to robustly inject logic into Bits/UInt/SInt.__getitem__ to check for GenVar. A simpler thing to get the ball rolling, avoiding any change to those types, would be a template() function which looks as follows: def template(T: type, *args): .... Then the code could look like:
N = m.GenVar('N', int)
BitsN = template(m.Bits, N)

@m.combinational
def ALU(a: BitsN, b: BitsN, op: m.Bits[2]) -> BitsN:
    ...
Then @combinational could automatically inspect the signature for any template types (which could just be a simple wrapper over the pair (m.Bits, m.GenVar('N', int))), and then the logic to replace the symbolic GenVar with concrete values all lives in there. I just think this could be a simpler way towards a proof of concept -- assuming not too many people will be locked into the syntax.
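To make the introspection concrete, here is a minimal sketch of what the template() marker could look like (GenVar is stubbed out since it doesn't exist yet; all names and details here are hypothetical):

import magma as m


class GenVar:
    """Stand-in for the proposed m.GenVar: a named symbolic parameter."""
    def __init__(self, name, T=int):
        self.name = name
        self.T = T


class Template:
    """Pairs a type constructor (e.g. m.Bits) with symbolic arguments."""
    def __init__(self, T, *args):
        self.T = T
        self.args = args

    def resolve(self, bindings):
        # Substitute concrete values for GenVars, e.g. {'N': 16} -> m.Bits[16].
        concrete = tuple(bindings[a.name] if isinstance(a, GenVar) else a
                         for a in self.args)
        return self.T[concrete[0] if len(concrete) == 1 else concrete]


def template(T, *args):
    return Template(T, *args)


N = GenVar('N')
BitsN = template(m.Bits, N)
assert BitsN.resolve({'N': 16}) == m.Bits[16]

@combinational would then only need to scan the annotations for Template instances and call resolve() with the concrete values at elaboration time.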
Hmm, thinking about it, we could drop the idea of having the generator/genvar relationship and instead treat this more like parametrized typing, where the function definition is "elaborated" based on the function arguments (rather than generating a function based on parameters). Basically, by invoking a combinational function with a set of types, it should check whether it matches the parametrization signature and, if so, generate a concrete version of the function with the parameters filled in. This would maintain the combinational -> function mapping and draws on the standard pattern found in statically typed languages (parametric polymorphism). I think this is related to the C++ templates style idea.
I think it seems like a reasonable idea to extend combinational to work with a parametric type as a first pass. It introduces the types to the system without adding any other requirements to the syntax that would lock us in. I think we can simply stage it as a "wrapper" combinational function that delays generation of the concrete version until we know the types (once the function is invoked).
I like that idea a lot. It's kinda like C++ template deduction. I imagine a combinational function becomes polymorphic by simply leaving out type annotations? One question is whether to allow/disallow mixed typing -- some arguments strictly typed, others polymorphic.
I felt hesitant about the GenVar thing because it felt out of the spirit of python/magma. I think extending combinational to be polymorphic is a powerful feature and very much smells like the rest of magma. In fact, if we think about plain functions we use for arithmetic, we already do this:
def foo(x, y): return x + y
out @= foo(in0, in1)
I looked at doing something similar to this for hwtypes. My goal was to nicely specify bit-width-changing operators (concat, ext, ...) in a way mypy could understand. That ended up being rather involved and likely impossible for ext.
However, if we just want to make:
N = m.GenVar('N', int)

@m.generator
@m.combinational
def ALU(a: m.Bits[N], b: m.Bits[N], op: m.Bits[2]) -> m.Bits[N]:
    if op == 0:
        return a + b
    elif op == 1:
        return a - b
    ...
Mostly syntax sugar for:
def make_alu(N):
    @m.combinational
    def ALU(a: m.Bits[N], b: m.Bits[N], op: m.Bits[2]) -> m.Bits[N]:
        if op == 0:
            return a + b
        elif op == 1:
            return a - b
        ...
    return ALU
I don't think that should be too hard.
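One low-tech way to implement that sugar -- a sketch only, assuming the annotations are left unevaluated (e.g. via from __future__ import annotations) and that m.combinational would pick up the patched __annotations__ (the real pass may instead work off the AST, which is exactly the open implementation question):

import magma as m


def generator(fn):
    # Sketch of the proposed m.generator decorator as sugar for make_alu.
    original = dict(fn.__annotations__)

    def make(**params):  # e.g. ALU(N=16), analogous to make_alu(16)
        env = {**fn.__globals__, **params}
        # Evaluate the quoted annotations with the GenVars bound to concrete
        # values, e.g. "m.Bits[N]" with N=16 becomes m.Bits[16]; params
        # override any symbolic N visible in the function's globals.
        fn.__annotations__ = {
            name: eval(ann, env) if isinstance(ann, str) else ann
            for name, ann in original.items()
        }
        return m.combinational(fn)

    return make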
I will take a moment to bikeshed though and say names should be optional (because strings are annoying to metaprogram). I would much prefer the signature:
class GenVar(...):
    def __init__(self, T: type, name: Optional[str] = None): ...
I don't know how meaningful this would be for hwtypes. Maybe if we made hwtypes operate at the type level it would make sense, but otherwise I don't immediately see how it could be used in hwtypes.
I pitched this idea once: basically, make operators on the types return the type of the result of applying the operator to values of the type, e.g. BitVector[8] + BitVector[8] == BitVector[8], BitVector[8].concat(BitVector[8]) == BitVector[16], ... We could then have symbolic values, which would be fine and could perhaps even be solved for, which would be cool, but it seems like a big project.
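As a toy illustration of that idea (not hwtypes' actual API, just a sketch): a metaclass can make the types themselves implement the operators, so type-level expressions compute result types:

class _BVMeta(type):
    _cache = {}

    def __getitem__(cls, n):
        # BV[8] creates (and caches) a width-8 type.
        if n not in _BVMeta._cache:
            _BVMeta._cache[n] = _BVMeta(f"BV{n}", (BV,), {"size": n})
        return _BVMeta._cache[n]

    def __add__(cls, other):
        # Width-preserving operator at the type level: BV[8] + BV[8] is BV[8].
        assert cls.size == other.size
        return BV[cls.size]

    def concat(cls, other):
        # Width-growing operator: BV[8].concat(BV[8]) is BV[16].
        return BV[cls.size + other.size]


class BV(metaclass=_BVMeta):
    size = None


assert BV[8] + BV[8] is BV[8]
assert BV[8].concat(BV[8]) is BV[16]

Swapping the concrete widths for symbolic values (GenVars) is where the "solved for" and big-project parts come in.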
This could enable type-checking or property-checking at the generator level...
See above, I think this is feasible but a lot of work.
... template idea ...
Why even have N? Why not declare it as:
BitsN = template(m.Bits)
and do the exact same introspection?
It seems reasonable that it should be possible to simply rewrite combinational to a "macro style" function without adding io (and hence not needing annotated types).
@m.combinational(as_macro=True)
def relu(x):
    if x < 0:
        return 0
    else:
        return x

# becomes

def relu(x):
    cond_0 = x < 0
    return cond_0.ite(0, x)
In fact this is already implemented (it's the ssa pass).
One thought. The biggest issue we have encountered with PEak and generator variables is the need to declare constants. For example:
N = m.GenVar('N', int)

@m.generator
@m.combinational
def ALU(a: m.Bits[N], b: m.Bits[N], op: m.Bits[2]) -> m.Bits[N]:
    if op == 0:
        return m.Bits[N](0)
    elif op == 1:
        return m.Bits[N](1)
    ...
Might present issues.
Cool idea... seems related to abstract interpretation...

template(Bits) != template(Bits), because we may want signatures like Bits[N] -> Bits[N] -> Bits[M] -> Bits[M]. We just need to be able to differentiate and refer to the various templatizations. Now that I think of this though, users would probably expect to do arithmetic etc. on GenVars, which would amount to implementing a mini-DSL for GenVars (e.g. Bits[N] -> Bits[N] -> Bits[N + 1]). See "feasible but a lot of work" 😂

I agree with this. Combinational doesn't even need to generate circuit definitions. I think this is preferred and we should consider moving combinational to be macro style (or "anonymous circuits").
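Going back to the mini-DSL for GenVars mentioned above: it would essentially be a small symbolic integer expression tree. A minimal sketch (names hypothetical):

class GenExpr:
    """Symbolic integer expression over generator variables."""
    def __add__(self, other):
        return _Add(self, other)

    def eval(self, env):
        raise NotImplementedError


class GenVar(GenExpr):
    def __init__(self, name):
        self.name = name

    def eval(self, env):
        return env[self.name]


class _Add(GenExpr):
    def __init__(self, left, right):
        self.left, self.right = left, right

    def eval(self, env):
        def value(x):
            return x.eval(env) if isinstance(x, GenExpr) else x
        return value(self.left) + value(self.right)


N = GenVar("N")
width = N + 1  # stays symbolic, so Bits[width] could be resolved at elaboration
assert width.eval({"N": 8}) == 9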
Agree, think constants are tricky. Fortunately, I think most of the operators, including ite, are smart enough to deal with POD python values. So unless you need a specific bit-width constant (don't see a case), it may be straightforward?
Agree, think constants are tricky. Fortunately, I think most of the operators, including ite, are smart enough to deal with POD python values. So unless you need a specific bit-width constant (don't see a case), it may be straightforward?
Issues come up if you want to mux constants.
Good point. Would need to do some logic to propagate bit-width from downstream, basically "stage" the mux.
I think there's still a case for combinational to generate a circuit definition for code size. E.g. every time it's used, if it's inlined versus treated as a hierarchical instance reference, there's definitely a tradeoff in compile time. I think it would be easy enough to have the user control this with an inline option like in C. It would be nice to also allow control in the instance context (rather than marking a definition as inline, instead when calling a combinational function force it to be inlined into the current definition). Not sure what the default option should be though. For example, in the ALU for the riscv-mini processor, it seems natural to have this be a separate circuit just from an organizational perspective. It makes it easier to unit test (otherwise we'd have to wrap it in a circuit that declares the interface).
What is the value in treating them as macros/anonymous circuits? In software, this is useful for avoiding function call overhead (and because code size is never really a problem besides in embedded systems). In hardware, I'm not sure if there's a similar advantage? Perhaps for simulation, but they will do that downstream anyways. For compiler passes, inlining is probably not a good idea for performance, since we could reuse a pass transformation on a single circuit definition versus having to repeat it in every context that the function was used.
However, I think we could support optional type annotations and either generate a definition or inline the circuit depending on the user choice and input values. I'll have to ponder what the better default option should be (definition vs inline).
What is the value in treating them as macros/anonymous circuits? In software, this is useful for avoiding function call overhead (and because code size is never really a problem besides in embedded systems).
In C macros are used as a replacement for generics, which is how we would be using them here.
Now obviously macros are a poor alternative to well typed generics but building a template system seems much harder than building a macro system as we already have the tools to build the macro system.
For example:
#define add(x, y) ((x) + (y))
vs
template<typename T>
T add(T x, T y) { return x + y; }
To me it actually seems more natural for the logic inside a combinational to be inlined by default, but I think having the option to do both (inline or circuit definition) is good. I actually think the logic to go from the macro-like thing to a circuit definition is not that hard. Here's one way to do it:
def combinational(fn, **options):
    untyped_ssa_fn = transform(fn)  # standard passes we're already doing on this fn
    arg_names = get_arg_names(untyped_ssa_fn)  # inspect fn's argument names

    class _ComboGen(m.Generator2):
        def __init__(self, input_types: List[m.Type]):
            name = ...  # some function of fn and input_types
            assert len(input_types) == len(arg_names)
            self.io = m.IO(**{arg_names[i]: m.In(input_types[i])
                              for i in range(len(arg_names))})
            inputs = [getattr(self.io, arg_name) for arg_name in arg_names]
            outputs = untyped_ssa_fn(*inputs)
            self.io += m.IO(**{f"O{i}": m.Out(type(output))
                               for i, output in enumerate(outputs)})
            for i, output in enumerate(outputs):
                port = getattr(self.io, f"O{i}")
                port @= output

    def _wrapped(*args):
        ckt = _ComboGen([type(arg).unqualified() for arg in args])
        return ckt(*args)

    return _wrapped
It may not be 100% correct, but that's the idea...
Consider a simple ALU:
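# sketch: a fixed-width (8-bit) combinational ALU
@m.combinational
def ALU(a: m.Bits[8], b: m.Bits[8], op: m.Bits[2]) -> m.Bits[8]:
    if op == 0:
        return a + b
    elif op == 1:
        return a - b
    ...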
Now we'd like to make it parametrized over the input bit width (8 becomes a variable N). There are two options:
- Use an explicit m.IO declaration. Here's a concrete example: https://github.com/leonardt/magma_riscv_mini/blob/master/riscv_mini/alu.py#L20-L59
- Use a make_alu method that has a parameter, but then we lose the nice isinstance(circuit, generator) relationship.

Proposal:
Introduce a GenVar construct that is similar to the TypeVar used in mypy for generics (https://mypy.readthedocs.io/en/stable/generics.html#defining-generic-classes). Also add an m.generator decorator that introspects type annotations to discover GenVars in order to construct an m.Generator definition.

Here's an example for the ALU:
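N = m.GenVar('N', int)

# N is a generator variable; m.generator discovers it from the type annotations
@m.generator
@m.combinational
def ALU(a: m.Bits[N], b: m.Bits[N], op: m.Bits[2]) -> m.Bits[N]:
    if op == 0:
        return a + b
    elif op == 1:
        return a - b
    ...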
Then, we could use ALU as follows:
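# hypothetical usage -- the exact invocation syntax is part of the proposal
alu16 = ALU(N=16)
assert isinstance(alu16, ALU)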
Various considerations:
- Keeping m.generator as a separate decorator so we can compose it with others like m.sequential and m.combinational.
- Allowing m.generator to manually declare the generator interface (e.g. @m.generator(N=N, op_map=op_map)); this could be useful as documentation for complicated interfaces with multiple parameters where it's difficult to implicitly extract the generator interface from the function declaration (i.e. you have to parse the type variables to find the proper ordering and required declarations).
- Having to explicitly declare GenVars (like we have to do with TypeVar). It's not elegant, but it seems useful enough for defining these combinational circuits with basic parametrization patterns (the hope is to encourage more use of m.combinational by making defining generators easier).