simon-weber / python-libfaketime

A fast time mocking alternative to freezegun that wraps libfaketime.
GNU General Public License v2.0

performance issue in v2.1.0? #81

Open jgb opened 5 months ago

jgb commented 5 months ago

Hello,

When upgrading from v2.0.0 to v2.1.0, our pytest suite, which usually takes around 10 minutes, runs for many hours without finishing. I've narrowed it down to the v2.1.0 upgrade of libfaketime as the cause.

Any idea what might be going on here?

Greetings,

jgb

azmeuk commented 5 months ago

Hi. Thank you for the report. For additional context, on which system do you run your testsuite? Do you observe the same behavior on different systems? We updated the underlying libfaketime version and also changed a bit of code. To see which of those causes the issue, would you try running your testsuite with the current python-libfaketime codebase but the previous libfaketime version?

git clone https://github.com/simon-weber/python-libfaketime.git
cd python-libfaketime
git clone https://github.com/simon-weber/libfaketime.git libfaketime/vendor/libfaketime
make -C libfaketime/vendor/libfaketime

Then install this version in your project environment and run your testsuite:

cd your/project/path
pip install -U --force-reinstall the/path/to/python-libfaketime
pytest

jgb commented 5 months ago

@azmeuk thanks, I did what you asked, and with that setup I don't observe the performance regression. This is on Ubuntu 24.04 LTS with Python 3.12.

vlaci commented 5 months ago

I've run into the same issue. I could work around it by setting FAKETIME_FORCE_MONOTONIC_FIX to 0.
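
For example, exporting it before running the test suite (libfaketime reads this variable at runtime):

export FAKETIME_FORCE_MONOTONIC_FIX=0
pytest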

azmeuk commented 5 months ago

@vlaci what is your OS? @jgb do you confirm FAKETIME_FORCE_MONOTONIC_FIX makes a difference for you?

If so, I don't know if we would want to enable it by default, as it is discouraged by the libfaketime documentation:

  Please try to avoid compiling with FORCE_MONOTONIC_FIX on platforms that
  do not need it. While it won't make a difference in most cases, depending
  on the specific FAKETIME settings in use, it would cause certain
  intercepted functions such as pthread_cond_timedwait() return with a
  time-out too early or too late, which could break some applications.

jgb commented 4 months ago

@azmeuk I can confirm. I just tested v2.1.0 with FAKETIME_FORCE_MONOTONIC_FIX=0, and that brings performance back to a normal level. Without that variable exported, performance is a disaster.

azmeuk commented 4 months ago

@simon-weber do you have any opinion about disabling FAKETIME_FORCE_MONOTONIC_FIX by default?

vlaci commented 4 months ago

@vlaci what is your OS? @jgb do you confirm FAKETIME_FORCE_MONOTONIC_FIX makes a difference for you?

If so, I don't know if we would want to enable it by default, as it is discouraged by the libfaketime documentation:

  Please try to avoid compiling with FORCE_MONOTONIC_FIX on platforms that
  do not need it. While it won't make a difference in most cases, depending
  on the specific FAKETIME settings in use, it would cause certain
  intercepted functions such as pthread_cond_timedwait() return with a
  time-out too early or too late, which could break some applications.

I am on NixOS (Linux) with glibc 2.39

simon-weber commented 4 months ago

It looks like this may be a regression in libfaketime 0.9.10. Maybe we should downgrade our vendored libfaketime instead? From a quick look over the FORCE_MONOTONIC_FIX discussions, I'm not sure about the safety of disabling it.

azmeuk commented 3 months ago

@jgb @vlaci can you check if #82 solves the issue for your use cases?

jgb commented 3 months ago

@jgb @vlaci can you check if #82 solves the issue for your use cases?

Hello, I tried, but it fails to build/install...

azmeuk commented 3 months ago

@jgb What error message do you see? Did you install with pip install git+https://github.com/simon-weber/python-libfaketime@refs/pull/82/head?

jgb commented 3 months ago

@jgb What error message do you see? Did you install with pip install git+https://github.com/simon-weber/python-libfaketime@refs/pull/82/head?

Sorry, I must have done something wrong initially; using your command it installed just fine. I tried it out just now: even though the result isn't as bad as it was, it still makes my test suite go from 5 minutes to 12+ minutes. It also makes about 20 of my tests fail that don't fail with v2.0.0; those failing tests are all related to Selenium + Chrome...

vlaci commented 3 months ago

@jgb @vlaci can you check if #82 solves the issue for your use cases?

I can confirm that the PR works. It indeed looks a bit slower. In my case, it's about 10%.

azmeuk commented 3 months ago

Thank you both for your feedback.

I tried it out just now: even though the result isn't as bad as it was, it still makes my test suite go from 5 minutes to 12+ minutes.

I can confirm that the PR works. It indeed looks a bit slower. In my case, it's about 10%.

The current benchmark.py script does not show different behavior between python-libfaketime 2.1.0 and the master branch with libfaketime 0.9.8, 0.9.9, or 0.9.10. I could not test 0.9.7 because it won't compile on my machine. This is too bad, because 0.9.7 is the closest version to simon's libfaketime fork that was previously used in python-libfaketime.

However, benchmark.py with python-libfaketime 2.0.0 shows better results than with 2.1.0 (something like 40%), so the good news is that this is reproducible. It seems the fault lies either with the libfaketime 0.9.7 to 0.9.8+ upgrade or, more probably, with my recent changes.
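
For reference, this is roughly how I compared the two, assuming benchmark.py from this repo is copied into a scratch directory and using the libfaketime package name from PyPI:

# comparison sketch: same script, two python-libfaketime versions
pip install libfaketime==2.0.0
python benchmark.py   # baseline results
pip install -U libfaketime==2.1.0
python benchmark.py   # something like 40% worse results here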

Just to be sure, can you check if libfaketime 0.9.8 improves performance in comparison to 0.9.9, with pip install git+https://github.com/simon-weber/python-libfaketime@refs/pull/82/head 🙏?

It also makes about 20 of my tests fail that don't fail with v2.0.0; those failing tests are all related to Selenium + Chrome...

Do the tests you are referring to also fail with python-libfaketime 2.1.0 (i.e. with libfaketime 0.9.10) or just with #82 (i.e. with libfaketime 0.9.9)?

vlaci commented 3 months ago

Just to be sure, can you check if libfaketime 0.9.8 improves performance in comparison to 0.9.9, with pip install git+https://github.com/simon-weber/python-libfaketime@refs/pull/82/head 🙏?

I assume you wanted me to check #83.

It was more of a hassle to install, as I needed to explicitly set CFLAGS=-Wno-error=unused-variable to make it build. The performance seems to be back to normal though.
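
Concretely, something along these lines, assuming the same refs/pull/.../head install pattern as above, pointed at #83:

CFLAGS=-Wno-error=unused-variable pip install --force-reinstall git+https://github.com/simon-weber/python-libfaketime@refs/pull/83/head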

jgb commented 3 months ago

Do the tests you are referring to also fail with python-libfaketime 2.1.0 (i.e. with libfaketime 0.9.10) or just with #82 (i.e. with libfaketime 0.9.9)?

I didn't actually manage to test this: v2.1.0 brings my whole machine to a halt; I killed the pytest processes after a few hours.

azmeuk commented 2 months ago

It was more of a hassle to install, as I needed to explicitly set CFLAGS=-Wno-error=unused-variable to make it build. The performance seems to be back to normal though.

@jgb do you also see nominal performance with #83?

@simon-weber what do you think of all of this? Should we downgrade?

jgb commented 2 months ago

CFLAGS=-Wno-error=unused-variable

Hi @azmeuk, I just tested #83, and I can confirm it's slower by a few orders of magnitude compared to v2.0.0. Not as slow as v2.1.0, but still so slow that it's unworkable.

simon-weber commented 2 months ago

Hm. Maybe we can get away with fixing this via FAKETIME_FORCE_MONOTONIC_FIX? I see comments in the libfaketime thread about it breaking things for Java, but maybe we don't need to worry about that since we're only running against Python.