Open MicahGale opened 1 year ago
So, initial results:
AFAIK you need to write a Codon script in Python-like syntax and then run/compile that.
I got one test case working in `test.codon`:
```python
from python import mcnpy
from python import numpy as np
from python import time

times = []
for i in range(10):
    start = time.time()
    problem = mcnpy.read_input("big_model.imcnp")
    end = time.time()
    times.append(end - start)

print("max", max(times), "min", min(times))
print("average", np.mean(times), "std dev", np.std(times))
```
I also ran a similar script in IPython.
Results from Codon:

```
max 14.552344560623169 min 12.914937257766724
average 13.605058789253235 std dev 0.57067858954262
```

Results from IPython:

```
max 10.400793075561523 min 7.664640665054321
average 9.345700883865357 std dev 0.7869787794326515
```
This confirms what @tjlaboss found:

> Many of the functions that lack Codon-native implementations (e.g. I/O or OS-related functions) will generally also not see substantial speedups from Codon.
Important reference: https://docs.exaloop.io/codon/interoperability/python
In GitLab by @tjlaboss on Mar 15, 2023, 18:45
I don't think Codon works like that anyway; MCNPy itself would need to be written in Codon or make use of the `@codon.jit` decorator. `from python import mcnpy` would still be using Python for `mcnpy`.
Ohhhh.
Have you found a way to compile a module?
GitHub: https://github.com/exaloop/codon
And of course MIT news: https://news.mit.edu/2023/codon-python-based-compiler-achieve-orders-magnitude-speedups-0314