Origen-SDK / o2

MIT License

STIL Parser and AST Compiler Improvements #96

Closed ginty closed 4 years ago

ginty commented 4 years ago

This PR adds a (hopefully) fully functional STIL parser and some initial compilation of it.

This gave the AST compiler a pretty good workout and there are a few tweaks as a result, but nothing too major. One change that might affect other processors is that node.process_children() now returns an array of the children (rather than a node with modified children); the original behavior is now provided by node.process_and_update_children(). I also cleaned up some of the internal file organization, but this should not really affect other code, e.g. all node definitions moved to generator/nodes.rs and the node struct split out into its own file.

I'll continue to work on this from here, but wanted to get the AST/compiler updates in as I think they should be stable after this.

coreyeng commented 4 years ago

So the process_children now flattens the children of the node, correct? Consuming the original parent? I only use process children in the Test node so far, but I think I still want to maintain that.

coreyeng commented 4 years ago

Looks good though! Starting on the de-compiling and conversion stuff early this time. This parses everything to the STIL nodes, correct? Would it then be processed further into the more generic ones which another generator would use?

ginty commented 4 years ago

Hi @coreyeng,

> So the process_children now flattens the children of the node, correct? Consuming the original parent? I only use process children in the Test node so far, but I think I still want to maintain that.

There's really not much of a change here, basically it used to be:

node.process_children(processor); // =>  <node_copy, children: [<processed_c1>, <processed_c2>]>

and now it is:

node.process_children(processor); // =>  [<processed_c1>, <processed_c2>]

The original behavior is now available via node.process_and_update_children(processor). Basically this optimizes for the very common pattern when developing processors:

for n in node.process_children(processor) {
  // Do something with the fully processed child nodes
}
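For anyone updating existing processors, the difference between the two calls can be sketched with some minimal stand-in types (the `Node` struct and closure-based processor below are hypothetical simplifications, not the real Origen generator API):

```rust
// Minimal sketch with stand-in types -- not the real Origen generator
// API, just an illustration of the two return shapes.
#[derive(Clone, Debug, PartialEq)]
pub struct Node {
    pub kind: String,
    pub children: Vec<Node>,
}

impl Node {
    /// New behavior: returns only the processed children.
    pub fn process_children(&self, f: &dyn Fn(&Node) -> Node) -> Vec<Node> {
        self.children.iter().map(|c| f(c)).collect()
    }

    /// Old behavior: returns a copy of the node with its children replaced.
    pub fn process_and_update_children(&self, f: &dyn Fn(&Node) -> Node) -> Node {
        Node {
            kind: self.kind.clone(),
            children: self.process_children(f),
        }
    }
}

fn main() {
    let parent = Node {
        kind: "Test".to_string(),
        children: vec![Node { kind: "cycle".to_string(), children: vec![] }],
    };
    // A trivial stand-in processor that upcases each node's kind
    let upcase = |n: &Node| Node { kind: n.kind.to_uppercase(), children: n.children.clone() };
    // Just the processed children; the parent is untouched
    let kids = parent.process_children(&upcase);
    // A copy of the parent with the processed children attached
    let updated = parent.process_and_update_children(&upcase);
    println!("{:?}\n{:?}", kids, updated);
}
```

So a processor that only cares about the children iterates the returned Vec directly, while one that wants the old behavior (e.g. your Test node case) just switches to the longer-named method.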

> This parses everything to the STIL nodes, correct? Would it then be processed further into the more generic ones which another generator would use?

Yeah, this fully parses everything into an AST instead of skipping the vectors like in O1 - I figured we are not going to have to worry much about the speed of this. The Pest crate I used to write the parser is also really good; it works along the same lines as Treetop but feels a notch more professional and robust. For sure this is the best parser I have ever written in my (admittedly limited!) parser-writing career.

I think it's also working out well having all the available nodes in Origen defined as a single type, as it means it's easy to create processors that transition something like STIL nodes into native Origen nodes. So the flow I envisaged here was:

As I'm writing this though, I wonder if we should be thinking about using a STIL AST as the main internal representation of vectors rather than coming up with our own. We wouldn't be limited to just STIL of course and could extend it with additional things that it doesn't cover, such as microcode and ATE-specific instrumentation control (let's call this STIL+). However, for regular vectors I think STIL does a really good job and has things like loops and match loops well covered, and I really like how it can handle both cyclized (tester-ready vectors) and non-cyclized (pin changes expressed as timed events) data.

Turning non-cyclized data into cyclized vectors would be handled eventually (let's get cyclized data under our belt first) and then cyclized STIL+ vectors would be the thing that all of our backend generators would handle. Origen APIs like tester.cycle and pin.drive would just generate the respective STIL nodes into the AST. ATE-specific parsers would either parse directly into STIL nodes or else be quickly transformed into them, and then they could enter the regular STIL processing flow.

I'm not sure how much you've done though with respect to a native Origen vector representation yet?

ginty commented 4 years ago

Oh, meant to say... I had some real fun and games writing this parser because Rust doesn't handle deeply recursive calls well at all.

I originally had something like this to turn the parsed string data into an AST:

fn to_node(token: Token) -> Node {
   // Create the node for this token...
   let mut node = Node::from_token(&token);
   // ...then recursively create nodes for each of its children:
   for t in token.into_children() {
     node.add_child(to_node(t));
   }
   node
}

That worked fine on small inputs, but after getting only about 8 levels deep it crashed with a stack overflow.

I was a bit surprised that Rust allowed such a thing, but seemingly deep recursion like this is an anti-pattern in Rust and it should instead be written as a procedural loop.

I can go into more detail later if you like, but for now just be aware that if you write functions that call themselves on deeply nested data, it won't end well.
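For the record, the usual workaround is to drive the traversal with an explicit `Vec`-based stack instead of the call stack. This is only a sketch with a minimal stand-in `Node` type (not the real parser output), but it shows the shape of it; note that `Drop` is recursive by default too, so a deep tree needs the same treatment when it is freed:

```rust
// Sketch only: a minimal stand-in Node type, not the real parser output.
struct Node {
    children: Vec<Node>,
}

// Recursion rewritten as a loop over an explicit Vec-based stack,
// so nesting depth is limited by the heap, not the call stack.
fn count_nodes(root: &Node) -> usize {
    let mut count = 0;
    let mut stack = vec![root];
    while let Some(node) = stack.pop() {
        count += 1;
        stack.extend(node.children.iter());
    }
    count
}

// Dropping a deep tree also recurses by default, so flatten that too.
impl Drop for Node {
    fn drop(&mut self) {
        let mut stack: Vec<Node> = self.children.drain(..).collect();
        while let Some(mut n) = stack.pop() {
            stack.extend(n.children.drain(..));
            // n drops here with no children left, so no recursive drop
        }
    }
}

fn main() {
    // Build a chain 100_000 levels deep -- far deeper than the naive
    // recursive version survived.
    let mut root = Node { children: vec![] };
    for _ in 0..100_000 {
        root = Node { children: vec![root] };
    }
    println!("{}", count_nodes(&root)); // 100001
}
```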

ginty commented 4 years ago

Thinking about it some more, probably there is a need for a more Origen-optimized representation for the backend generators to deal with, e.g. to represent DUT object references as an ID rather than a name.
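As a rough sketch of what such a pass could look like (the `PinRef` type and `resolve` function below are purely hypothetical, nothing from the actual codebase), the idea is just a table-driven rewrite from names to IDs before the AST reaches the generators:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for whatever AST node carries a DUT reference.
#[derive(Debug, PartialEq)]
enum PinRef {
    Name(String),
    Id(usize),
}

// Rewrite name-based references to numeric IDs so backend generators
// never have to do string lookups. Unknown names are left untouched.
fn resolve(refs: Vec<PinRef>, table: &HashMap<String, usize>) -> Vec<PinRef> {
    refs.into_iter()
        .map(|r| match r {
            PinRef::Name(n) => match table.get(&n) {
                Some(id) => PinRef::Id(*id),
                None => PinRef::Name(n),
            },
            resolved => resolved,
        })
        .collect()
}

fn main() {
    let mut table = HashMap::new();
    table.insert("porta0".to_string(), 0);
    let refs = vec![PinRef::Name("porta0".to_string()), PinRef::Id(7)];
    println!("{:?}", resolve(refs, &table)); // [Id(0), Id(7)]
}
```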

coreyeng commented 4 years ago

Thanks! That all still looks good to me and that flow is what I was expecting as well: eventually get back to a 'native-Origen' representation. I haven't done anything more with vector representation. I added a bit more complexity to pin transactions, but that's about it, and I'm confident the spirit of the AST remains the same. I also have a processor I use to optimize pin transitions, but it's still not 'vectorized' until the renderer gets hold of it. I have V93K stuff generating and was going to look into extracting that out into a vector-based-renderer trait to see what reuse I can get from it; I was envisioning that being the 'highest level' point where vectors come into play.

I've used recursive calls in Rust and they seemed fine, though I never got very deep. Actually, I don't even know if I ended up using them in the end. I think I did away with most of them when I transitioned pins to using IDs.