Closed by SebCorbin 7 years ago
Hi,
thanks for the suggestion. At present, vprof checks only the specified module or package; this was done to keep heatmaps from looking too noisy. It's not very hard to show all executed code, but there should be a better way 😄
By the way, you can try running your tests individually if that makes sense for your setup.
Even if I specify my test, the command always begins with manage.py, and that is always the file that gets profiled.
Is there a way to separate the command that is run from the file that is profiled? Which part of vprof would I need to modify to do that?
IIRC, Django tests are standard Python unittest tests and can be run manually, but it depends on how your tests are structured.
You might try modifying the add_code and calc_heatmap methods of CodeHeatmapCalculator in vprof/code_heatmap.py.
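For anyone attempting that modification, the core of it is a predicate deciding whether an executed line's file belongs to the profiled target. A minimal sketch of the idea follows; the names `should_include` and `record_line` are hypothetical illustrations, not vprof's actual API:

```python
import os

def should_include(filename, targets):
    """Hypothetical filter: keep a traced line only when its file
    lives under one of the explicitly requested target paths."""
    path = os.path.abspath(filename)
    return any(path.startswith(os.path.abspath(t)) for t in targets)

heatmap = {}

def record_line(filename, lineno, targets):
    """Count one executed line, roughly what a heatmap calculator
    does for each 'line' trace event."""
    if should_include(filename, targets):
        key = (filename, lineno)
        heatmap[key] = heatmap.get(key, 0) + 1
```

Relaxing `should_include` (for example, to accept anything outside the standard library) is essentially what the later comments in this thread converge on.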
Chiming in to ask for this too - I wish the heatmap showed code in imported modules. Without that, it's impossible to profile anything beyond simple demo code.
EDIT: Could the heatmap just show all the Python files that were executed? I think that's better than having to choose a single file.
I think it needs a happy medium. In some cases, showing all Python files might be too much 😄
One option might be showing heatmaps only for code that's not in the standard library or in any installed packages.
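That option can be sketched with `sysconfig`, which knows where the standard library and installed packages live. This is a rough heuristic assumed for illustration, not vprof's actual implementation:

```python
import os
import sysconfig

# Directories holding the standard library and installed packages.
_STDLIB = sysconfig.get_path('stdlib')
_SITE_PACKAGES = sysconfig.get_path('purelib')

def is_project_code(filename):
    """Heuristic: a file counts as project code when it sits outside
    both the stdlib and the site-packages directories."""
    path = os.path.abspath(filename)
    return not path.startswith((_STDLIB, _SITE_PACKAGES))
```

A heatmap calculator could then drop any traced line whose file fails this check.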
I've just pushed a commit to master
that enables code heatmap calculation for all running code, not just the specified modules. The output might look noisy, so I am thinking about skipping standard library modules if that makes sense.
I've added a stdlib checker to the code heatmap calculation, but it looks like it adds significant overhead. Perhaps the heatmap should be calculated during the post-processing phase instead.
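One way to take that cost off the hot path, sketched here with a bare `sys.settrace` tracer rather than vprof's actual code: record every executed line unconditionally while tracing, and apply the stdlib filter once during post-processing.

```python
import sys
from collections import Counter

counts = Counter()

def tracer(frame, event, arg):
    # Hot path: just count (filename, lineno); no filtering here.
    if event == 'line':
        counts[(frame.f_code.co_filename, frame.f_lineno)] += 1
    return tracer

def trace_call(func):
    """Run func with line tracing enabled, then restore the tracer."""
    sys.settrace(tracer)
    try:
        func()
    finally:
        sys.settrace(None)

def demo():
    total = 0
    for i in range(3):
        total += i
    return total

trace_call(demo)

# Post-processing: apply any expensive filter (a stdlib check, etc.)
# once per recorded file here, instead of once per executed line.
filtered = {k: v for k, v in counts.items() if not k[0].startswith('<')}
```

The per-line work stays at a single dictionary increment; the stdlib check runs over the (much smaller) set of distinct files afterwards.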
The heatmap now excludes library code. I'll play with it for some time before releasing, but it already looks way better 😄
Nice work, although I would recommend checking against sys.path instead of sys.prefix, because sys.prefix points at the virtualenv path, and I still got a heatmap for my base Python path anyway (which is /usr/local/Cellar/python35/3.5.2/Frameworks/Python.framework/Versions/3.5/lib/python3.5/).
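The distinction matters because `sys.prefix` is a single directory (the virtualenv root when one is active), while `sys.path` lists every directory imports are actually resolved from, including the base interpreter's stdlib. A hypothetical check along those lines, not vprof's actual code:

```python
import os
import sys

def on_sys_path(filename):
    """Hypothetical check: is this file under some sys.path entry?
    Unlike comparing against sys.prefix, this also matches the base
    interpreter's stdlib when running inside a virtualenv."""
    path = os.path.abspath(filename)
    return any(path.startswith(entry) for entry in sys.path if entry)
```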
Oh, thanks! Stdlib detection needs some improvement 😃
I updated stdlib detection, but I haven't tested virtualenv interoperability yet.
Can you briefly describe the environment you use to reproduce this issue?
Thanks!
Improved virtualenv handling.
The code heatmap seems to be working for my virtualenv examples. I think it's ready for release as an experimental feature.
Description
I tested your package against Django tests after I saw what it was capable of with the code heatmap, which renders great, by the way. So I thought I would try it on my Django tests, because Selenium takes a long time, and a heatmap would definitely reveal which parts are slow so I can focus on them.
How to reproduce
$ vprof -c h "manage.py test --failfast"
Actual results
The code heatmap only shows the manage.py file. Would it be possible to add an option to monitor my test files (e.g. tests.py or any other file)?
Expected results
I would like several files (that I pick beforehand) to show up in the profiling result.
Version and platform
vprof==0.36.1 MacOS 10.12.5 Python 3.5.2