ajgdls opened this issue 2 years ago
> on 1st December the tests will be re-run against latest PyPI versions and then published
@ajgdls Are you willing to run your tests before this "final exam"? I have some changes (#90) to both P4P and the PVXS library which I would like to see results for prior to merging. Your feedback would help me decide if the API additions are worthwhile.
I've arranged things so that both PRs can be built locally via pip. I suggest a clean virtualenv, or an explicit uninstall of any pypi.org builds. The build machine should only need Python and a supported C++ compiler (e.g. GCC >= 4.8, circa RHEL7).
pip install -v --no-binary epicscorelibs,pvxs,p4p git+https://github.com/mdavidsaver/p4p@net-opt
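A quick sanity check after installing, to confirm the virtualenv is using these local builds and not leftover pypi.org wheels, is to print the installed versions (just a suggestion, not part of the build instructions):

# Sanity check: list the versions actually installed in this virtualenv.
# importlib.metadata is in the standard library from Python 3.8 onward.
from importlib.metadata import version

for pkg in ('p4p', 'pvxs', 'epicscorelibs'):
    print(pkg, version(pkg))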
Hi @mdavidsaver, of course! And it isn't an exam :) we decided to put a date on it to encourage participation, not to set hard deadlines. I'm more than happy for people to re-run or improve the tests at any time against any future versions if they see a benefit. I should be able to run your tests today, and I'll feed the results back to you.
I had a quick go with your updates and it improves performance by about 10%. I tried a bit of profiling with py-spy top and got:
# py-spy top -- python -m EPICSPyClientPerformance -c p4p -r 1000
Collecting samples from 'python -m EPICSPyClientPerformance -c p4p -r 1000' (python v3.8.15)
Total Samples 2700
GIL: 1.00%, Active: 14.00%, Threads: 5
%Own %Total OwnTime TotalTime Function (filename)
1.00% 1.00% 1.10s 1.46s pop (p4p/client/raw.py)
2.00% 2.00% 0.740s 0.750s cpu_percent (psutil/__init__.py)
0.00% 0.00% 0.440s 0.470s notify (threading.py)
0.00% 0.00% 0.280s 0.280s __str__ (p4p/nt/scalar.py)
0.00% 1.00% 0.230s 0.560s add_sample (EPICSPyClientPerformance/monitor_client.py)
0.00% 0.00% 0.170s 0.170s _store (p4p/nt/scalar.py)
0.00% 2.00% 0.100s 2.39s handle (p4p/util.py)
0.00% 0.00% 0.090s 0.260s unwrap (p4p/nt/scalar.py)
0.00% 0.00% 0.080s 0.760s _event (p4p/client/thread.py)
0.00% 0.00% 0.080s 0.610s put (queue.py)
0.00% 0.00% 0.080s 0.340s unwrap (p4p/nt/__init__.py)
0.00% 0.00% 0.080s 0.150s get (queue.py)
0.00% 0.00% 0.070s 0.070s isEnabledFor (logging/__init__.py)
1.00% 1.00% 0.060s 0.130s debug (logging/__init__.py)
cpu_percent can be ignored as it is in another thread, but the rest is what is being profiled.
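As a reading aid: the put (queue.py), get (queue.py) and notify (threading.py) entries are consistent with a producer/consumer handoff between the network thread and a worker thread that runs user callbacks. An illustrative sketch of that pattern (not p4p's actual code):

# Illustrative producer/consumer handoff; not p4p's implementation.
import queue
import threading

updates = queue.Queue()
handled = 0

def producer(n):
    # Network-thread stand-in: queue each update (put calls notify internally).
    for i in range(n):
        updates.put(i)
    updates.put(None)  # sentinel telling the worker to stop

def worker():
    # Callback-thread stand-in: drain the queue and run the "callback".
    global handled
    while True:
        item = updates.get()
        if item is None:
            break
        handled += 1

p = threading.Thread(target=producer, args=(100000,))
w = threading.Thread(target=worker)
p.start(); w.start()
p.join(); w.join()
print('handled', handled, 'updates')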
Hi @mdavidsaver, I ran the tests with your PRs installed into a clean virtual environment and got the following:
2022-11-04 14:03:08,604 CPU: Mean 66.69055118110224
2022-11-04 14:06:06,354 CPU: Mean 68.33109243697467
2022-11-04 14:13:56,462 CPU: Mean 68.03557692307683
I ran the tests a few times to ensure the results were consistent.
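For context, a "CPU: Mean" figure like those above could plausibly be sampled with psutil; this is an assumption about the harness, and the actual measurement code is in EPICSPyClientPerformance. A minimal sketch:

# Hedged sketch: average a process's CPU usage with psutil, one plausible
# way to produce a "CPU: Mean" figure like the log lines above.
import time
import psutil

def mean_cpu(pid, samples=30, interval=1.0):
    proc = psutil.Process(pid)
    proc.cpu_percent(None)  # prime the counter; the first call returns 0.0
    readings = []
    for _ in range(samples):
        time.sleep(interval)
        readings.append(proc.cpu_percent(None))
    return sum(readings) / len(readings)

# e.g. print('CPU: Mean', mean_cpu(pid_of_client_process))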
Hello, I've written a small Python module https://github.com/ajgdls/EPICSPyClientPerformance to compare the performance of six Python-based EPICS clients. I have attempted to test each in a consistent way, and the README has some preliminary results for monitoring 1000 records at 10 Hz for 1000 samples.
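For concreteness, here is a minimal sketch of the kind of monitor loop the p4p test exercises; the PV names are hypothetical placeholders, and the real setup is in the repository linked above:

# Hedged sketch of a monitor-throughput check using p4p's thread client.
# The PV names TEST:PV0..TEST:PV999 are hypothetical placeholders.
import threading
import time
from p4p.client.thread import Context

count = 0
lock = threading.Lock()

def on_update(value):
    # Count every update delivered by the client.
    global count
    with lock:
        count += 1

ctxt = Context('pva')
subs = [ctxt.monitor('TEST:PV%d' % i, on_update) for i in range(1000)]
time.sleep(100)  # 1000 samples at 10 Hz per record -> ~100 s window
for sub in subs:
    sub.close()
ctxt.close()
print('received', count, 'updates')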
I would really appreciate it if this module's owner would review the EPICSPyClientPerformance test code for this client implementation, to check that I'm actually testing in the correct and most efficient way. I would be happy to make any updates that show an increase in performance for this client.
This testing is driven by Diamond Light Source, and on 1st December the tests will be re-run against the latest PyPI versions and the results published (probably on tech-talk) for further discussion, so if anyone wants to push performance optimisations they will get picked up on that date.
Thanks for taking a look; I'm looking forward to hearing from everyone.