olofk / edalize

An abstraction library for interfacing EDA tools
BSD 2-Clause "Simplified" License

Grid of workers for specified eda tool #6

Nic30 opened this issue 5 years ago

Nic30 commented 5 years ago

Hello,

I was working on a similar project.

My problem was that Vivado starts up too slowly, and it turned out to be much more effective to run it as a backend server with a client sending jobs to it. That turned out to be a good idea, because it then became easy to build a build-grid server on top of it. Real-time communication with the TCL interpreter in Vivado is also much better, because errors show up exactly when they happen and exception handling becomes possible.

Do you think the code for this would be useful to you? It would also be great if we had an abstraction library for XDC and other constraint file formats. What are you planning to do with this library? Maybe I can help.

olofk commented 5 years ago

Hi,

This is interesting. I have been planning for a long time to make it possible to send jobs to a server farm, but on a slightly different level so that it would work for all tools. But I didn't know Vivado had such a mode already. It would be nice to support this if it isn't too much work. I'm wondering how edalize would need to be extended to work with this. What extra parameters do you need? A server IP/port? Can we use this mode locally and somehow dynamically start a Vivado instance?

Regarding your second question, what aspects of the constraint files are you thinking about? I would say that the xdc (and other FPGA tool constraint formats) roughly consists of pin assignments, timing constraints and other options. For pin assignments I would love to see an abstraction library. There are already several tools (e.g. migen, rhea) that have done their own versions of this, but it would be much better with a common library that could be reused by different projects. But I don't think it would make sense to put this inside of edalize. Or maybe? Not sure actually. Regarding timing constraints and other options I think it's hard to abstract this as those things are so tech-specific.
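To make the pin-assignment idea concrete, here is a rough sketch of what such a standalone abstraction might look like: boards described once as data, and tool-specific constraint files (here XDC) generated from that. All names and pin values below are made up for illustration, not taken from any existing library.

```python
# Hypothetical sketch of a pin-assignment abstraction that renders XDC.
from dataclasses import dataclass


@dataclass
class Pin:
    name: str                     # logical port name in the design
    location: str                 # package pin, e.g. "E3"
    iostandard: str = "LVCMOS33"  # default IO standard


def to_xdc(pins):
    """Render a list of Pin objects as XDC set_property commands."""
    lines = []
    for p in pins:
        lines.append(f"set_property PACKAGE_PIN {p.location} [get_ports {p.name}]")
        lines.append(f"set_property IOSTANDARD {p.iostandard} [get_ports {p.name}]")
    return "\n".join(lines)


if __name__ == "__main__":
    # Example values only, not a real board description.
    print(to_xdc([Pin("clk100", "E3"), Pin("led0", "H17")]))
```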

My intention with edalize is to provide abstractions for the most common things and then slowly expand from there. I would say the most requested feature right now is abstractions for DPI, so that's something I would definitely like some help with. Another thing I would love to see is an abstraction library for FPGA programming tools that I could use in edalize. There is already support for a couple of tools (impact, vivado, quartus), migen supports a couple more and apio supports another subset. Having all of these available in a common library would be great.

Nic30 commented 5 years ago

didn't know Vivado had such a mode already

Vivado has a remote build mode, but that is not what I was talking about. I just wrapped Vivado: I started it in TCL mode and then used subprocess communication to control the running instance. On top of that I used simple TCP to copy files and communicate.
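Roughly like this (an untested sketch; it assumes `vivado` is on PATH and skips prompt and error handling):

```python
# Minimal sketch of wrapping Vivado as a long-running backend:
# start it once in TCL mode and feed it commands over stdin, so the
# slow startup cost is paid only once per session.
import subprocess

# "vivado -mode tcl" starts an interactive TCL shell instead of the GUI.
proc = subprocess.Popen(
    ["vivado", "-mode", "tcl"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)


def run_tcl(cmd):
    """Send one TCL command to the running Vivado instance."""
    proc.stdin.write(cmd + "\n")
    proc.stdin.flush()


run_tcl("puts [version]")       # commands execute immediately,
run_tcl("read_verilog top.v")   # so errors show up as they happen
run_tcl("exit")
print(proc.stdout.read())
proc.wait()
```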

What extra parameters do you need?

ip:port; the default behaviour is to execute locally if no server is available.
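By that I mean something like the following fallback logic (a sketch only; `run_remote`/`run_local` are placeholder stubs, and the host/port are arbitrary):

```python
# If a build server answers on ip:port, hand the job over;
# otherwise fall back to running the tools locally.
import socket


def run_local(job):
    print("no server reachable, running", job, "locally")


def run_remote(host, port, job):
    print("sending", job, "to", f"{host}:{port}")


def build(job, host="127.0.0.1", port=12345):
    try:
        with socket.create_connection((host, port), timeout=2):
            pass                      # server answered
    except OSError:
        return run_local(job)
    return run_remote(host, port, job)
```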

There are several build servers. A build server is a Python script (a TCP server for commands and filesystem IO, which is also an object that can be controlled locally). This script can run Vivado/Quartus in TCL mode and forward the TCL IO over TCP. Vivado is started on demand and killed on demand. (It is really just a simple terminal and filesystem bypass, so it would work with any software.)
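The "terminal bypass" part is roughly this (heavily simplified sketch: one connection, no framing, no file transfer, arbitrary port):

```python
# A tiny TCP server that forwards received lines into a tool
# started in TCL mode.
import socket
import subprocess


def serve(port=12345, tool_cmd=("vivado", "-mode", "tcl")):
    tool = subprocess.Popen(tool_cmd, stdin=subprocess.PIPE, text=True)
    srv = socket.create_server(("0.0.0.0", port))
    print("build server listening on", port)
    conn, _addr = srv.accept()
    with conn, conn.makefile("r") as commands:
        for line in commands:            # each line is one TCL command
            tool.stdin.write(line)
            tool.stdin.flush()
    tool.stdin.close()
    tool.wait()


if __name__ == "__main__":
    serve()
```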

There is an API similar to your library which can communicate with the build server to get the job done.

what aspects of the constraint files are you thinking about?

I see pin mappings, IO standards and delays, clock properties and domain crossings as the most repetitive settings, and I would like to have a single library with modifiable default configurations for common boards so we do not have to duplicate and debug them every time.
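By "modifiable defaults" I mean something along these lines (a sketch; the board name and values are illustrative only):

```python
# A shared board database provides default pin/clock settings,
# and a project overrides only what differs.
ARTY_A7 = {
    "clk100": {"pin": "E3", "iostandard": "LVCMOS33", "period_ns": 10.0},
    "led0":   {"pin": "H17", "iostandard": "LVCMOS33"},
}


def board_config(defaults, **overrides):
    """Start from the shared defaults and apply project-specific changes."""
    cfg = {name: dict(props) for name, props in defaults.items()}
    for name, props in overrides.items():
        cfg.setdefault(name, {}).update(props)
    return cfg


# A project that only needs a different IO standard on one pin:
cfg = board_config(ARTY_A7, led0={"iostandard": "LVCMOS18"})
```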

There are already several tools (e.g. migen, rhea)

I know, that is why I stopped developing my custom solution and started searching for an existing one: https://github.com/Nic30/hwt/commit/edf001d59f01b22707c3d72a50a500ff302273bc#diff-04c6e90faac2675aa89e2176d2eec7d8R134

but it would be much better with a common library

It is required, because otherwise it would not be possible to just package some "fpga design component" for any target device.

Regarding timing constraints and other options I think it's hard to abstract this as those things are so tech-specific.

It means the system has to be extensible. We can never support everything, like fixed node placement etc., but some users may need such things, for example for DDR controllers.
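One way to keep it extensible is an escape hatch for raw, tool-specific constraint lines next to the abstracted settings (a sketch; the class and constraint line below are made up):

```python
# Abstracted settings plus a verbatim pass-through for anything
# the library does not model, e.g. fixed placement for a DDR controller.
class ConstraintSet:
    def __init__(self):
        self.pins = []        # abstracted, portable settings
        self.raw_lines = []   # tool-specific escape hatch

    def add_raw(self, line):
        """Append an arbitrary constraint line verbatim."""
        self.raw_lines.append(line)

    def emit(self):
        return "\n".join(self.raw_lines)


cs = ConstraintSet()
cs.add_raw("set_property LOC RAMB36_X0Y1 [get_cells u_ddr/fifo_ram]")
print(cs.emit())
```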

abstractions for DPI

What do you need? We are using DPI, but we decided to move away from SystemVerilog for verification because there is always some problem with functionality, extensibility or distribution. Verification is a must-have, but SystemVerilog is not the only tool for verification. I am currently reimplementing UVM in C++ and Python; it should be done at the start of 2019 Q2. It is already working for things like a 4x10G NIC with AXI4 DMA and an integrated MMU. (I mean there are only simple methods for coverage collection, randomization etc., but you have full Python 3.) And since it is C++, an interface to things like a driver or QEMU can be made easily.