Open · adeebshihadeh opened 5 months ago
A good place to optimize is hypothesis example generation. Example generation in version 4.51.0 is ~2x faster than in our current version, and it gets slower somewhere between then and 4.55.3. The latest version (6.102.6) is even slower than our current version (6.47). You can root-cause it and open a PR to hypothesis or openpilot, depending on where the optimization lies.
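One way to root-cause this is a micro-benchmark run under each pinned hypothesis version (e.g. 4.51.0, 4.55.3, 6.102.6) and compared. A minimal sketch, assuming only that example generation dominates when the test body is empty; the strategy here is an arbitrary stand-in, not openpilot's actual one:

```python
# Time raw hypothesis example generation with an empty test body, so the
# measured cost is (almost) entirely strategy drawing. Run this under each
# pinned hypothesis version and compare the printed times.
import time
from hypothesis import given, settings, strategies as st

@settings(max_examples=100, deadline=None, database=None)
@given(st.dictionaries(st.text(), st.integers()))
def draw_examples(d):
    pass  # no work here: we only pay for example generation

start = time.perf_counter()
draw_examples()  # calling the decorated function runs the full example loop
elapsed = time.perf_counter() - start
print(f"100 examples in {elapsed:.3f}s")
```

A bisect over hypothesis versions with this script should narrow down which release introduced the slowdown.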
Another option is to optimize `get_fuzzy_car_interface_args` by caching `params_strategy` to avoid regenerating strategies each time. Additionally, drawing `st.dictionaries` is quite slow; according to my tests, it can take up to 0.3 seconds.
> Another option is to optimize `get_fuzzy_car_interface_args` by caching `params_strategy` to avoid regenerating strategies each time.

I tried this and it didn't seem to work.

> Additionally, drawing `st.dictionaries` is quite slow; according to my tests, it can take up to 0.3 seconds.

Right, so it's likely a problem in the underlying library.
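For reference, the caching idea discussed above can be sketched as follows. The function name mirrors the openpilot helper but the strategy contents are purely illustrative placeholders, and note that per the comment above this did not appear to help in practice:

```python
# Build the hypothesis strategy once and reuse it, instead of
# reconstructing it on every test invocation.
import functools
from hypothesis import strategies as st

@functools.cache
def cached_params_strategy():
    # stand-in for the real params_strategy construction in
    # get_fuzzy_car_interface_args; the keys/bounds here are made up
    return st.fixed_dictionaries({
        'speed': st.floats(0, 70, allow_nan=False),
        'gas': st.floats(0, 1, allow_nan=False),
    })

s1 = cached_params_strategy()
s2 = cached_params_strategy()
assert s1 is s2  # the strategy object is constructed only once
```

If caching construction doesn't help, the cost is likely in drawing from the strategy rather than building it, which points back at the library itself.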
Is this bounty still available? I'm looking to work on it, but I'm unsure whether it has already been claimed by one of the PRs above.
@adeebshihadeh, is this still open?
I'm interested in working on this, but it looks like this bounty has been (partially) solved? Is the bounty still open to go from ~0.7s to <= 0.2s?
Getting this result on an AMD Ryzen Threadripper PRO 3955WX (16 cores):
```
Using --randomly-seed=4229886755
rootdir: /home/batman/openpilot
configfile: pyproject.toml
plugins: timeout-2.3.1, xdist-3.6.1, cov-5.0.0, mock-3.14.0, asyncio-0.24.0, sugar-1.0.0, randomly-3.15.0, cpp-2.6.0, flaky-3.8.1, hypothesis-6.47.5, anyio-4.4.0, subtests-0.13.1, repeat-0.9.3
asyncio: mode=Mode.STRICT, default_loop_scope=function
16 workers [211 items] collecting ...
selfdrive/car/tests/test_car_interfaces.py ✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓ 60% ██████
✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓ 100% ██████████
================================================================================= slowest 10 durations ==================================================================================
1.27s call selfdrive/car/tests/test_car_interfaces.py::TestCarInterfaces::test_car_interfaces_001_ACURA_RDX
1.22s call selfdrive/car/tests/test_car_interfaces.py::TestCarInterfaces::test_car_interfaces_032_FORD_MUSTANG_MACH_E_MK1
1.20s call selfdrive/car/tests/test_car_interfaces.py::TestCarInterfaces::test_car_interfaces_027_FORD_EXPLORER_MK6
1.17s call selfdrive/car/tests/test_car_interfaces.py::TestCarInterfaces::test_car_interfaces_026_FORD_ESCAPE_MK4
1.16s call selfdrive/car/tests/test_car_interfaces.py::TestCarInterfaces::test_car_interfaces_028_FORD_FOCUS_MK4
1.16s call selfdrive/car/tests/test_car_interfaces.py::TestCarInterfaces::test_car_interfaces_042_GENESIS_GV80
1.15s call selfdrive/car/tests/test_car_interfaces.py::TestCarInterfaces::test_car_interfaces_017_CHEVROLET_VOLT
1.14s call selfdrive/car/tests/test_car_interfaces.py::TestCarInterfaces::test_car_interfaces_047_HONDA_CIVIC_2022
1.12s call selfdrive/car/tests/test_car_interfaces.py::TestCarInterfaces::test_car_interfaces_025_FORD_BRONCO_SPORT_MK1
1.12s call selfdrive/car/tests/test_car_interfaces.py::TestCarInterfaces::test_car_interfaces_202_VOLKSWAGEN_POLO_MK6
Results (14.30s):
211 passed
batman@workstation-shane:~/openpilot$
```
So to achieve the original goal, we still need roughly a 5-6x speedup.
> Getting this result on an AMD Ryzen Threadripper PRO 3955WX (16 cores):
I ran it on an M3 MBP. Appreciate the clarification!
Takes ~1-2s per car. That's a big hit since we support nearly 300 different cars.
Goal is <=0.2s avg and <1s max on the CI machine, with the same test coverage. There are likely two or three things slowing this test down that will be obvious with a bit of profiling.
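A quick way to surface those hot spots is to run a single test case under cProfile and sort by cumulative time; a minimal sketch, where `run_one_case` is a hypothetical stand-in for whatever exercises one car's interface test:

```python
# Profile one invocation and print the ten heaviest call sites by
# cumulative time. Replace run_one_case with the real test body.
import cProfile
import io
import pstats

def run_one_case():
    # stand-in workload; in practice, call one car interface test here
    sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
run_one_case()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats('cumulative').print_stats(10)
print(out.getvalue())
```

With the real test body plugged in, the top few entries of this output should show whether time is going into strategy drawing, interface construction, or something else entirely.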
Command to run:
Current output on my workstation: