Calvin-L opened this issue 6 years ago
Just to comment: when I was trying out a few examples in the repo, core utilization was not very good (by "not very good" I mean Cozy could not fully utilize all the cores I have, regardless of the workload; I expected this, though).
Do you mean that Cozy does not spawn enough threads to use all available cores? This can happen if your specification does not contain very many methods.
Each thread synthesizes an implementation for a single query operation. Parallelizing that task to exploit idle cores would be very difficult, likely difficult enough to be worth a workshop paper in the synthesis community. If you have any good ideas, please implement them!
In the meantime, I don't think finding more parallelism is a high priority. My experience has been that real-world specifications have quite a few methods.
> Do you mean that Cozy does not spawn enough threads to use all available cores? This can happen if your specification does not contain very many methods.
Yes, but I do agree that parallelism is not a high priority.
Right now Cozy spawns one thread for every method on the data structure (including one thread for each new method it adds, which can be quite a few).
Because of this behavior, Cozy's ability to handle large specifications is limited more by the number of cores on your computer than by your willingness to wait. I would prefer a balance: Cozy should use all available cores, but no more, at the cost of longer runs on machines with fewer cores.
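To make the problem concrete, here is a minimal sketch of the current scheme as I understand it; `synthesize_method` and `spec.methods` are illustrative placeholders, not Cozy's actual API:

```python
import multiprocessing
import time

def synthesize_method(method):
    ...  # placeholder: search for an implementation of one query operation

def improve_all(spec, global_timeout_s):
    # One process per method, all started at once, regardless of core count.
    jobs = [multiprocessing.Process(target=synthesize_method, args=(m,))
            for m in spec.methods]
    for j in jobs:
        j.start()
    # A single global deadline shared by every job.
    deadline = time.monotonic() + global_timeout_s
    for j in jobs:
        j.join(timeout=max(0.0, deadline - time.monotonic()))
        if j.is_alive():
            j.terminate()  # stop whatever has not finished by the deadline
```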
My proposal is:

- Change the `--timeout` flag in the documentation to mean "timeout per method" rather than a global timeout
- In `high_level_interface.py`, do not start new synthesis jobs until an idle core is available
- In `high_level_interface.py`, run each job until it has run for `--timeout`, rather than treating the timeout as a global timeout
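A minimal sketch of that scheme, assuming each synthesis job can check its deadline cooperatively (again, the helper names are hypothetical, not Cozy's actual API):

```python
import os
import time
from concurrent.futures import ProcessPoolExecutor

def one_improvement_round(method, best):
    ...  # placeholder for a single round of Cozy's search
    return best

def synthesize_method(method, timeout_s):
    # Per-method budget: each job gets the full --timeout to itself.
    deadline = time.monotonic() + timeout_s
    best = None
    while time.monotonic() < deadline:
        best = one_improvement_round(method, best)
    return best

def improve_all(spec, timeout_s):
    # max_workers bounds concurrency to the core count; queued jobs do not
    # start until a worker (i.e. a core) becomes idle.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        futures = {m: pool.submit(synthesize_method, m, timeout_s)
                   for m in spec.methods}
        return {m: f.result() for m, f in futures.items()}
```

With `max_workers` set to the core count, the executor starts each queued job only when a worker frees up, so a machine with fewer cores takes longer but is never oversubscribed.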