Open netomi opened 12 months ago
Using the current version in my project results in this:
tn@proteus:~/workspace/netomi/ort$ time python-inspector --python-version 311 --operating-system linux --json-pdt /tmp/ort-PythonInspector676305276266227952/python-inspector16065407924165681261.json --analyze-setup-py-insecurely --requirement /tmp/ort-Poetry6870289546925138000/requirements.txt9586036788288994416.tmp
real 0m29,104s
user 0m12,745s
sys 0m0,353s
Using aiohttp:
tn@proteus:~/workspace/netomi/python-inspector$ time /home/tn/workspace/netomi/python-inspector/venv/bin/python /home/tn/workspace/netomi/python-inspector/src/python_inspector/resolve_cli.py --operating-system linux --python-version 311 --json-pdt /tmp/ort-PythonInspector676305276266227952/python-inspector16065407924165681261.json --analyze-setup-py-insecurely --requirement /tmp/ort-Poetry6870289546925138000/requirements.txt9586036788288994416.tmp
real 0m16,893s
user 0m8,284s
sys 0m0,439s
Thanks @netomi for raising this issue and for your PR fixing it :)
I am using python-inspector as part of the OSS Review Toolkit, and I noticed that the analysis of some Python projects is rather slow even when a project has a poetry.lock file, in which case the resolution should be straightforward.
Some debugging revealed that most of the time is spent downloading from PyPI. I could speed this up greatly by issuing the requests asynchronously with the aiohttp module. Additionally, I would suggest replacing the existing cache with aiohttp-client-cache.
I will prepare a PR to showcase the use of async I/O and the considerable speedups it yields by parallelizing the downloads from PyPI instead of performing them sequentially, which takes a while when a project has a larger number of dependencies.
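The core of the parallelization idea can be sketched with the standard library alone. The network round-trip is simulated here with asyncio.sleep, and the package names are made up; in python-inspector the fetch step would be an aiohttp request against PyPI instead.

```python
import asyncio
import time

async def fetch_metadata(package: str) -> str:
    # Simulated network round-trip; a real implementation would
    # `await session.get(...)` against PyPI using aiohttp.
    await asyncio.sleep(0.1)
    return f"{package}: ok"

async def resolve_all(packages: list[str]) -> list[str]:
    # asyncio.gather awaits all fetches concurrently instead of
    # one after another, so total time is roughly the slowest
    # single fetch rather than the sum of all of them.
    return await asyncio.gather(*(fetch_metadata(p) for p in packages))

packages = ["aiohttp", "requests", "packaging"]  # hypothetical dependency set

start = time.perf_counter()
results = asyncio.run(resolve_all(packages))
elapsed = time.perf_counter() - start

# Concurrent: ~0.1 s total, versus ~0.3 s sequentially for three fetches.
print(results, round(elapsed, 2))
```

With real PyPI requests the same pattern applies: open one aiohttp ClientSession, schedule one coroutine per dependency, and gather them, so a large dependency tree no longer pays one full round-trip per package.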