byuccl / bfasst

Tools for FPGA Assurance Flows
Apache License 2.0

YAML flow with design creation / dynamic job creation #348

Closed · jgoeders closed this 7 months ago

jgoeders commented 7 months ago

I believe that with the current implementation of bfasster.py, I can provide a YAML file that points the tool to a directory containing several designs, and the appropriate ninja file will be created to run all of them.

However, right now I'm working on a tool that will generate a bunch of random designs, which I then want to run using bfasster in parallel. Is there a way to do this?

Obviously I could run the tool and populate the directory of designs before invoking bfasster, but I'm wondering about a flow where the first step creates multiple designs, and thus multiple jobs. Any way to do this?

...it seems like a drawback of the ninja approach is that all jobs need to be known when the ninja file is constructed. Is there an easy way for one job to dynamically create new ones?

@KeenanRileyFaulkner @dallinjdahl ?

KeenanRileyFaulkner commented 7 months ago

I think if you created a script in ninja_utils to generate your random designs, robust enough to stand alone, you could then create a tool to call it. At the tool level you would specify the inputs and outputs, and that tool could then be incorporated into whatever flow you want. Essentially, the ninja_utils script would just need to know how many designs to generate and what to name them when it is invoked. That info could then be passed to the other tools in the flow(s), and their build snippets would be templated in with the generated design paths before the designs are actually created. I don't know if @dallinjdahl has other thoughts though.
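A minimal sketch of what such a standalone generator script might look like, assuming a command-line interface with `--count`, `--prefix`, and `--out-dir` arguments (all hypothetical names, not the actual bfasst interface):

```python
# Hypothetical standalone design generator that could live in ninja_utils.
# The script name, arguments, and output layout are illustrative only.
import argparse
import random
from pathlib import Path


def make_random_design(name: str) -> str:
    """Return a trivial random Verilog module (placeholder for a real generator)."""
    width = random.randint(2, 8)
    return (
        f"module {name} (input [{width - 1}:0] a, input [{width - 1}:0] b,\n"
        f"               output [{width - 1}:0] y);\n"
        f"  assign y = a ^ b;\n"
        f"endmodule\n"
    )


def main() -> None:
    parser = argparse.ArgumentParser(description="Generate N random designs")
    parser.add_argument("--count", type=int, required=True, help="number of designs")
    parser.add_argument("--prefix", default="rand_design", help="design name prefix")
    parser.add_argument("--out-dir", type=Path, required=True, help="output directory")
    args = parser.parse_args()

    args.out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(args.count):
        name = f"{args.prefix}_{i}"
        (args.out_dir / name).mkdir(exist_ok=True)
        (args.out_dir / name / f"{name}.v").write_text(make_random_design(name))


if __name__ == "__main__":
    main()
```

Because the design names are just `{prefix}_{i}`, any downstream tool can compute the same names at flow-creation time, which is what makes the templating described above possible.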

dallinjdahl commented 7 months ago

I would look at the error injector flow, as it generates lots of designs from a single one. The names of the jobs need to be known, but the files don't have to exist. You just make the output list really long for the build snippet, and then specify the builds for each one. This makes the ninja file long, since you have to replicate the flow for each target, but one of the design decisions of ninja is to be an "assembly language" of build systems, so we can automate that generation on the Python side if necessary. The parallelism should be handled by default, so there's no need to worry about that.
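As a rough illustration of automating that replication on the Python side (not the actual bfasst code; rule names, commands, and paths like `generate_design.py` and `run_synth.py` are hypothetical), one build statement can be emitted per design name at flow-creation time even though the design files don't exist yet:

```python
# Sketch: emit a ninja build edge per known design name before the files exist.
from pathlib import Path

DESIGN_NAMES = [f"rand_design_{i}" for i in range(10)]  # known when the flow is created

lines = [
    "rule generate_design",
    "  command = python generate_design.py --name $name --out $out",
    "",
    "rule synth",
    "  command = python run_synth.py $in -o $out",
    "",
]

for name in DESIGN_NAMES:
    src = f"designs/{name}/{name}.v"
    # The generated source is declared as an output of the generator rule ...
    lines.append(f"build {src}: generate_design")
    lines.append(f"  name = {name}")
    # ... and as an input to the rest of the flow, replicated once per target.
    lines.append(f"build build/{name}/synth.json: synth {src}")
    lines.append("")

Path("build.ninja").write_text("\n".join(lines))
```

Since ninja schedules independent build edges concurrently by default, the per-design jobs run in parallel without any extra work.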

jgoeders commented 7 months ago

I'm going to close this as the functionality I need is already there. Indeed, at flow creation I know all of the jobs that will be created, so I can use the existing approaches.

In the future, should we actually have a flow where you don't know the jobs until runtime, we could re-open this.