unbtorsten opened this issue 2 years ago
@unbtorsten With what level of parallelism (-j parameter for make) are you running the fuzzers? I think lower values here should reduce RAM consumption (at the expense of processing time, of course)
I limited the thread count (for make and for Vivado) to 2 when running the fuzzers. However, I don't think the Python scripts are affected by this setting.
This is the specific script that could use refactoring: https://github.com/SymbiFlow/prjxray/blob/master/fuzzers/074-dump_all/create_node_tree.py
The current, naive approach of generating the entire structure in memory and then writing it to a file consumes upwards of 30 GByte of RAM (~250 GB according to jrrk2) these days, and it would be nice to be able to run it on "smaller" machines (at the cost of processing time, of course). From my understanding, the tree contains topological information about the raw FPGA fabric, i.e. how the actual hardware is interconnected (that is, memory cells, signal processing blocks, lookup tables, ... which can be selected/programmed to fulfill specific functions).
I think this is the type of file that is generated: https://raw.githubusercontent.com/SymbiFlow/prjxray-db/master/artix7/xc7a100t/node_wires.json
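To illustrate what a lower-memory approach could look like: instead of building the whole node→wires dict and dumping it in one go, the mapping can be written out one entry at a time. This is only a sketch; the function name and the (node, wires) record format are my assumptions, not prjxray's actual internal representation.

```python
import json

def write_node_wires_incrementally(records, out_path):
    """Stream a {node: [wires]} JSON object to disk entry by entry,
    so only one node's wire list is resident in memory at a time.

    `records` is assumed to be an iterable of (node_name, wire_list)
    pairs -- hypothetical; the real dump format may differ.
    """
    with open(out_path, "w") as f:
        f.write("{")
        first = True
        for node, wires in records:
            if not first:
                f.write(",")
            first = False
            f.write(json.dumps(node))   # JSON-escape the key
            f.write(":")
            f.write(json.dumps(wires))  # write this node's wires only
        f.write("}")
```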
For artix_50t, I am able to run the scripts. In particular, 074-dump_all/Makefile executes '074-dump_all/generate_after_dump.sh', which in turn invokes 'create_node_tree.py'. The script's memory use peaks at 6.5 GByte for this part.
Due to the lack of documentation (and no really helpful input from SymbiFlow's Slack channel), I am trying to reverse-engineer and understand create_node_tree.py. My goal is to serialize the processing of the data, i.e. handle it piece by piece rather than all at once, and thereby reduce the peak memory requirement.
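As a first step in that direction, the dump files could be consumed one at a time through a generator instead of being merged into one big in-memory tree. A minimal sketch, assuming one JSON dump per tile under a directory with a "nodes" list per file (both assumptions; the actual layout would need to be checked against generate_after_dump.sh):

```python
import glob
import json

def iter_node_records(dump_dir):
    """Yield (node_name, wires) pairs one dump file at a time.

    Hypothetical layout: one JSON file per tile under `dump_dir`,
    each containing a "nodes" list -- verify against the real dumps.
    """
    for path in sorted(glob.glob(f"{dump_dir}/*.json")):
        with open(path) as f:
            data = json.load(f)  # only this one file is in memory
        for node in data.get("nodes", []):
            yield node["name"], node.get("wires", [])

# Combined with the incremental writer sketched above, peak memory is
# bounded by the largest single dump file rather than the whole device:
# write_node_wires_incrementally(iter_node_records("dump"), "node_wires.json")
```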