proppy opened this issue 2 years ago
Yes. Right after I published this, I learned that @donn was working on the same thing, so we joined up, which led to https://github.com/The-OpenROAD-Project/OpenLane/issues/652. What I'm still waiting for is an officially released image from efabless with openlane+sky130 pdk.
The functionality is basically waiting for me to flick a switch; what I'm worried about is that it'll overwhelm GitHub Actions :(
@donn maybe we could get those built using custom runners on GCP instead? https://antmicro.com/blog/2021/08/open-source-github-actions-runners-with-gcp-and-terraform/
(happy to help with any of the setup that needs to be done)
Alternatively, I do have this Cloud Build recipe I've been using to build a derived notebook image: https://gist.github.com/proppy/cf84c89238a7aa3442358d4dbe009462#file-cloudbuild-yaml
@proppy That blog post is sadly little more than exposition. I tried following it and got nowhere fast. I would indeed appreciate help with the setup, as well as an actual technical document on how to use it.
@donn I fully understand that enabling this will make CI much more cumbersome. You're calling the shots here, but from my perspective I think it would be good to separate the flows for CI and releases. Users generally don't need the latest release. I'm still using OpenLANE v0.12 with a random PDK build from about a year ago for most of my work, because I haven't had a compelling reason to use a newer version.

But I believe that an efabless-provided openlane+sky130 image would be a huge help for most people, including you. Many of the problems I see are related to installation issues, and personally I'm still failing more often than not to follow the instructions because I forget to set an env var or run from the wrong directory. With an image like that, many (most?) users wouldn't ever need to touch the openlane repo or the PDKs unless they need something specific. Troubleshooting would also be much easier if people used a versioned image: "A: X doesn't work. B: Which image are you using? A: 1.63. B: Ah ok. That was fixed in 1.72" instead of keeping track of all the individual parts.

I can produce an image like that and distribute it, but I think it really needs to come from you, because we also want to be able to use this for the MPW precheck.
And also, perhaps we don't need to build it at all in the image? I see there are some CI actions in the open_pdks repo, https://github.com/RTimothyEdwards/open_pdks/actions, so perhaps we can just get ready-built PDKs from there and put them in the image. Kind of like what I do in this repo, but from an upstream source.
@olofk It's not about the CI being cumbersome, nor is it about me disagreeing with this strategy. Trust me, nobody would like it more than me if the users could just clone and use exactly one image. It would save me a lot of pain.
What it is about, plain and simple, is the sheer volume of data: adding a full PDK build to the GitHub image has caused far faster computers than the GitHub Actions CI to lock up. And mind you, I am not discussing the build process here, I'm discussing just the result files.
There are already flows in place to build the result files, such as https://github.com/Cloud-V/sky130-builds. The problem is, that's a partial PDK build, i.e., just sky130_fd_sc_hd. So the quote-unquote "solution" here is to have a different image for each SCL and set sky130_fd_sc_hd as the default. Additionally, PDK build results require a specific path, which is highly setup-dependent. That's why this is not an official Efabless tool; rather, it's a stopgap solution I did for a research project.
Adding to that complexity, for example, is that we're adding support for asap7. So, do we include all PDKs in one image? Or do we build multiple images, one for each PDK and then again once for each SCL?
This is not as cut and dried as people believe.
Thanks for clarifying. I see that I didn't have a full understanding of the issue, and it seems you've already been through these ideas. Let's keep looking for some reasonable middle ground then. Happy to assist where I can.
So @donn, I've been doing some thinking and discussed this with @proppy. We think the best way forward could be to create a PDK manager. The thing is that we have pretty much exactly the same problem with SymbiFlow, where there is currently a whole bunch of containers, each bundling the toolchain with the data files for a particular device. I did a quick hack a while ago that allowed users to download them on demand, cache them, pick different versions and report their path. A bit like pkg-config, but with download abilities. A user of an openlane image would then add something like `-v$(pdk-config sky130:fd_sc_hd):/pdk` to download (if needed) the PDK of choice and then get its path. The current plan is to formalize what I have for SymbiFlow a bit and add sky130 support, then hopefully add asap7 and other PDKs after that.
For the ASIC PDKs it does actually make more sense to store them outside of a container, since they will be used by other tools like simulators and SPICE which aren't necessarily bundled within the container.
Sounds like a plan?
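To make the idea concrete, here is a minimal sketch of what such a pdk-config-style helper could look like. The script name, cache layout, spec syntax, and the fetch-on-first-use behavior are all illustrative assumptions, not an existing tool:

```python
#!/usr/bin/env python3
"""Hypothetical pkg-config-like PDK locator: resolve a spec such as
'sky130:fd_sc_hd' to a local path, fetching into a cache on first use."""
import os
import pathlib
import sys

# Assumed cache root; overridable via an (invented) PDK_CACHE variable,
# similar in spirit to pkg-config's PKG_CONFIG_PATH.
CACHE = pathlib.Path(
    os.environ.get("PDK_CACHE", pathlib.Path.home() / ".cache" / "pdk")
)


def pdk_path(spec: str) -> pathlib.Path:
    """Return the install path for 'pdk:scl', creating/fetching it if absent."""
    pdk, _, scl = spec.partition(":")
    dest = CACHE / pdk / (scl or "default")
    if not dest.is_dir():
        dest.mkdir(parents=True)
        # A real tool would fetch and unpack a versioned tarball here,
        # then make sure $PDKPATH-style references resolve against `dest`.
    return dest


if __name__ == "__main__":
    print(pdk_path(sys.argv[1]))
```

A container user could then run something like `docker run -v $(pdk-config sky130:fd_sc_hd):/pdk ...` and let the tool handle downloading and caching.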
@olofk Yes! That's something I've been considering.
Problem is the sky130A PDK needs to be made portable first. Currently it's tied to its install location.
@donn, a few other ideas we discussed w/ @olofk:
"Binary" release for open_pdks/sky130A
Basically, if https://github.com/RTimothyEdwards/open_pdks had the Releases
tab enabled with versioned tarballs for each variant of the PDK, containing the output of `./configure --prefix={somewhere} --enable-{something}-pdk && make && make install`
(assuming they are/become relocatable), would that be enough for users to reliably depend on? Would that make the job of a tool like the one @olofk pointed to in https://github.com/fusesoc/docker-openlane-sky130/issues/1#issuecomment-1020593990 easier?
"PyPI" wheels for skywater-pdk
Similarly, the skywater-pdk
repo does have some Python code associated with it, https://github.com/google/skywater-pdk/tree/main/scripts/python-skywater-pdk, to report its location. Maybe that could be distributed on PyPI alongside some functionality to download/cache a given variant?
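One hedged sketch of the PyPI angle (the package name `sky130_fd_sc_hd_pdk` is made up, not the real skywater-pdk API): ship the PDK files as wheel package data and report their installed location via importlib:

```python
"""Sketch of a pip-installable PDK package reporting its own location.
The default package name below is a placeholder for illustration."""
import pathlib
from importlib import resources


def pdk_root(package: str = "sky130_fd_sc_hd_pdk") -> pathlib.Path:
    # resources.files() is anchored at the installed package, so consumers
    # get an absolute path regardless of the interpreter's install prefix.
    return pathlib.Path(str(resources.files(package)))
```

Because the path is resolved at import time, this sidesteps the relocatability question for anything that consumes the PDK through the Python API.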
"fusesoc" PDK packages
Another "wild" idea (though IIRC @olofk wasn't a fan) could be to distribute the PDKs (or the tool to fetch/generate/cache them) as FuseSoC cores. After all, as I currently understand it, the PDK standard cells do have some Verilog files, SPICE models and potentially testbenches associated with them (like most FuseSoC cores do). Would it make sense to package them similarly and allow users to run flow/target/build stages for them, as well as depend on them?
Also, about @donn's earlier point in https://github.com/fusesoc/docker-openlane-sky130/issues/1#issuecomment-1019905612:

> Adding to that complexity, for example, is that we're adding support for asap7. So, do we include all PDKs in one image? Or do we build multiple images, one for each PDK and then again once for each SCL?
I'm curious how frequently a given project switches between variants of a given PDK (or between PDKs); it might make sense to optimize the default distribution for the most common use case (while also providing the pieces for developers who need to assemble something more custom).
Re relocatable, I just did a `grep -r /`
in the tarballs that @rtimothyedwards produces for the open_pdks CI.
In libs.ref, all I could find were things like `libs.ref/sky130_fd_sc_hd/maglef/sky130_fd_sc_hd__dfrtp_1.mag:string GDS_FILE $PDKPATH/libs.ref/sky130_fd_sc_hd/gds/sky130_fd_sc_hd.gds`,
which seems fine to me, as the PDK manager would help us figure out how to set $PDKPATH.
libs.tech was a bit more complicated. There are things like `openlane/custom_cells/lef/sky130_ef_io_core.sh:$OPENLANE_ROOT/scripts/rectify_above.py -1.5 \`,
which have a dependency on the openlane repo, which is something I would prefer to avoid.
I also found a bunch of these: `qflow/sky130_osu_sc_18t_ms.sh:set leffile=/home/runner/work/open_pdks/open_pdks/pdks/pdk/sky130A/libs.ref/sky130_osu_sc_18t_ms/lef/sky130_osu_sc_18t_ms.lef`
Were those the ones you're thinking of?
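The grep above could also be automated. A rough Python equivalent that scans a tarball for baked-in absolute paths; the suspect patterns are just the two kinds of hits quoted above, and $PDKPATH references are deliberately not flagged since a PDK manager can set that variable:

```python
"""Scan a PDK build tarball for hard-coded, non-relocatable paths.
The patterns below are assumptions drawn from the grep hits quoted
in this thread, not an exhaustive relocatability check."""
import re
import tarfile

# CI build prefix and openlane-repo references indicate baked-in paths.
SUSPECT = re.compile(rb"/home/runner|\$OPENLANE_ROOT")


def find_hardcoded_paths(tarball: str) -> list[str]:
    """Return names of members whose contents match a suspect pattern."""
    hits = []
    with tarfile.open(tarball) as tar:
        for member in tar:
            if member.isfile() and SUSPECT.search(tar.extractfile(member).read()):
                hits.append(member.name)
    return hits
```

An empty result for a release tarball would be a decent smoke test that the build is relocatable modulo $PDKPATH.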
@proppy Re 2, I realized that the files aren't that large. At least the ones I'm looking at from the open_pdks CI artifacts are about 80 MB, so I think it could be perfectly fine to use PyPI for distribution if that saves us some work.

Re 3, I definitely think we should add FuseSoC core description files to the PDK builds so that we can easily pick up Verilog models for simulation etc. I did that e.g. for the SRAM macros, but I think it would make more sense to distribute the PDKs in a way that doesn't involve FuseSoC, because I'm worried there's a fair amount of work needed in FuseSoC to make this smooth. But I might be wrong. Re having targets in the core description files for the PDKs themselves, I don't think there are any actions you can do with just the PDK. I see them more as dependencies of other things.
@olofk In parallel I'll take a shot at packaging skywater-pdk and open_pdks with conda, see https://github.com/hdl/conda-eda/issues/159 and https://github.com/hdl/conda-eda/issues/160, as it'll make them easy to use within https://colab.research.google.com/ environments (which don't support containers, see https://github.com/googlecolab/colabtools/issues/299#issuecomment-615308778).
@mithro pointed me to https://docs.conda.io/projects/conda-build/en/latest/resources/make-relocatable.html and https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#detect-binary-files-with-prefix which seem (at least on paper) to effectively work around https://github.com/RTimothyEdwards/open_pdks/issues/60
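For reference, a hypothetical `meta.yaml` fragment (the package name and version are placeholders) showing the conda-build option from those docs, which records files containing the build prefix and rewrites them to the install prefix at install time:

```yaml
package:
  name: open-pdks-sky130a   # placeholder name
  version: "1.0"            # placeholder version

build:
  number: 0
  # Have conda-build detect binary files that embed the build prefix
  # and patch them to the actual install prefix on the user's machine.
  detect_binary_files_with_prefix: true
```

Text files containing the prefix are handled by default; this flag extends the same mechanism to binary files, which is the relevant case for the paths baked into PDK builds.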
Now that https://github.com/The-OpenROAD-Project/OpenLane/pull/846 is in, I'm wondering what's missing to do the same thing in the official
efabless/openlane
image for the MPW shuttle? Can we use this issue to make a list?
I'd like to help :)