jakirkham opened this issue 1 month ago
Seems like someone introduced a bad/expensive test in the test suite.
The runners do seem to be low on memory; the end of the log says:
Maximum memory usage observed: 2.8G
which isn't totally unreasonable.
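For context, here is a minimal sketch of how one could report the agent's memory headroom from inside a build step, to compare against the peak usage the log reports. This is just an illustration, not something the feedstock currently does; `psutil` being installed in the build environment and the `report_memory` helper name are assumptions.

```python
# Hypothetical diagnostic helper (an assumption, not existing feedstock tooling):
# print how much memory the CI agent has and how much is currently in use.
# Requires psutil (`pip install psutil`).
import psutil


def report_memory():
    mem = psutil.virtual_memory()
    total_gib = mem.total / 2**30
    used_gib = (mem.total - mem.available) / 2**30
    print(f"Total memory:     {total_gib:.1f} GiB")
    print(f"Currently in use: {used_gib:.1f} GiB ({mem.percent:.0f}%)")


if __name__ == "__main__":
    report_memory()
```

Running something like this at the start and end of the test step would show how close the 2.8G peak sits to the agent's actual limit.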
Thanks Ralf! 🙏
It could be. I've seen similar warnings on other feedstock builds recently (admittedly not all feedstock builds have this issue), so I'm wondering whether there have also been changes on the CI side (like image size) that have pushed some feedstocks closer to the memory limit.
I don't think it's numpy-specific. I see these warnings across many feedstocks. It seems likely to me that Azure introduced more visible/noisy warnings about this, without any real underlying change. Alternatively, perhaps they down-sized the agents somehow. But as long as jobs are not failing due to getting OOM-killed, I don't think there's anything to do right now.
Yes, I'm thinking Azure made some kind of change to the workers. I see NumPy more as a canary in the coal mine.
Though it is quite high. The warnings above show memory usage approaching 97%. In some cases I've needed to restart jobs because of this.
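Purely as a sketch of how one might watch for this during the test run (rather than only seeing it in the CI warnings afterwards), a small background watchdog could poll memory usage and flag when it nears the limit. The 95% threshold, polling interval, and `psutil` dependency are all assumptions for illustration.

```python
# Hypothetical watchdog (an assumption, not existing feedstock tooling):
# poll memory usage in a background thread and print a warning when it nears
# the limit, similar to the ~97% warnings seen in the CI logs. Requires psutil.
import threading
import time

import psutil


def watch_memory(threshold_percent=95.0, interval_s=5.0, stop_event=None):
    """Print a warning whenever memory usage crosses threshold_percent."""
    stop_event = stop_event or threading.Event()
    while not stop_event.is_set():
        percent = psutil.virtual_memory().percent
        if percent >= threshold_percent:
            print(f"WARNING: memory usage at {percent:.1f}%")
        time.sleep(interval_s)


# Usage: start before the test suite, stop afterwards.
stop = threading.Event()
thread = threading.Thread(target=watch_memory, kwargs={"stop_event": stop}, daemon=True)
thread.start()
# ... run the tests here ...
stop.set()
```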
I think we will need to do something about this, though it probably will not be NumPy-specific (unless I've missed some test detail that Ralf alluded to above).
We are seeing several Windows jobs here struggling with memory usage. Here is a screenshot of several of them:
Looking at one log, we see warnings like this: