grlee77 closed this pull request 2 years ago
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
I don't think you added your patch.
This looks like the right thing, I should close #86. Or we need to decide which one should be merged into the other. It might be easier to simply add a call to rm_pyx_assoc_c_cpp.sh in the build script on this PR.
Yeah, I can squash a lot of the existing commits and add it here.
I'm not sure how to efficiently remove the cython-generated files in a Windows batch script so I skipped that for now. The intention for 0.19.1 is to not have these files in the source tarball in the first place.
There are at least a couple of actual non-Cython-generated .c files, so we can't just do a global removal of all .c files.
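Since the question above is how to do this portably (a Windows batch script was the sticking point), one cross-platform alternative is a small Python helper that deletes only .c/.cpp files with a matching .pyx sibling — a hedged sketch of the same idea as rm_pyx_assoc_c_cpp.sh; the function name is illustrative, not part of the actual script:

```python
from pathlib import Path


def rm_pyx_generated(root):
    """Delete .c/.cpp files that sit next to a .pyx of the same stem.

    Cython writes foo.c (or foo.cpp) beside foo.pyx, so only files with a
    matching .pyx sibling are removed; hand-written .c files are kept.
    Returns the list of removed paths.
    """
    removed = []
    for ext in (".c", ".cpp"):
        for path in Path(root).rglob("*" + ext):
            if path.with_suffix(".pyx").exists():
                path.unlink()
                removed.append(path)
    return removed
```

Because it only ever removes a file when the sibling .pyx exists, the hand-written .c files mentioned above survive automatically, with no global removal involved.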
@erykoff, if you care about co-author credit and have an email you want to associate with your commits, let me know and I will amend the commit messages. Thanks for helping out here!
No problem, I don't need any credit. This is just part of my very long quest to get https://github.com/scikit-image/scikit-image/pull/5249 released... 😄
Oh no what happened to the windows build? One (new) test failure?
I just double checked that I didn't mess anything up in the rebase here. A diff between this branch and #85 shows only the change to the rank filter patch and the build.sh cleanup command (i.e. nothing that should be related to this failure). There was some similar failure on Windows + PyPy in the other branch, but not sure if it ever occurred on the Cpython builds.
I'm pretty sure this is new ... the win64 pypy failure has been there since the recent py310 migration. Don't know if it's worth restarting the build. But at least osx_64 is all passing!
There is also a random_state optional kwarg to the skimage.measure.ransac function used in the failing test. Perhaps we need to also set that in these test cases to make sure we always get a consistent result? It looks like other tests in the same file do set random_state=1 in the ransac calls.
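To make the point about seeding concrete, here is a minimal numpy-only illustration; the ransac call itself is only sketched in a comment with hypothetical model arguments, since the failing test's exact setup isn't shown in this thread:

```python
import numpy as np

# Two generators created with the same seed reproduce the same draws, which
# is why passing random_state=1 makes RANSAC's random sampling (and thus the
# test outcome) deterministic across runs.
rng_a = np.random.default_rng(1)
rng_b = np.random.default_rng(1)
assert np.array_equal(rng_a.integers(0, 100, size=10),
                      rng_b.integers(0, 100, size=10))

# Hypothetical shape of the fix in the failing test (arguments illustrative):
# model, inliers = ransac(data, EllipseModel, min_samples=5,
#                         residual_threshold=1, random_state=1)
```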
Good news is that the non-pypy windows builds passed! Bad news is that the travis ppc64le took too long and errored during testing. Do the tests need to be turned off for those?
I think we can restart the builds with travis when they fail on master.
I'm not going to be able to debug pypy failures. We could just skip the tests and leave it as a known failure. That way at least people can use part of scikit-image. We could report them in the feedstock/scikit-image but I don't think I want to spend more time than "reporting" them for now.
That sounds fine to me. Do you know the syntax to enable this skip? Can we use a skip: key under the tests section like one can do for build? (not sure the name of the appropriate env. var to specify)
I will try using the existing style of skipping just the specific failing cases:
{% set tests_to_skip = tests_to_skip + " or test_hough_ellipse_zero_angle or test_hough_ellipse_non_zero" %} # [python_impl == 'pypy']
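For readers unfamiliar with the pattern, this is roughly how such a skip list is assembled and consumed in a conda-forge meta.yaml; the pytest invocation below is a sketch of the common feedstock idiom, not necessarily this recipe's exact command:

```yaml
{% set tests_to_skip = "_not_a_real_test" %}
{% set tests_to_skip = tests_to_skip + " or test_hough_ellipse_zero_angle or test_hough_ellipse_non_zero" %}  # [python_impl == 'pypy']

test:
  commands:
    - pytest --pyargs skimage -k "not ({{ tests_to_skip }})"
```

The trailing `# [python_impl == 'pypy']` selector makes the line apply only to PyPy builds, so CPython builds keep running those two tests.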
Hi! This is the friendly automated conda-forge-linting service.
I wanted to let you know that I linted all conda-recipes in your PR (recipe) and found some lint.
Here's what I've got...
For recipe:
Selectors are suggested to take a <two spaces>#<one space>[<expression>] form. See lines [89].

Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
Okay, looks like this PyPy test skip worked.
Aside from a couple of timeouts this looks good, so it is probably ready to merge. I will go ahead and update the comment as suggested above now.
A new test failure.
    def test_max_batch_size():
        """Small batches are slow, large batches -> large allocations -> also slow.

        https://github.com/scikit-image/scikit-image/pull/6035#discussion_r751518691
        """
        coords = np.random.randint(low=0, high=1848, size=(40000, 2))
        tstart = time.time()
        ensure_spacing(coords, spacing=100, min_split_size=50,
                       max_split_size=2000)
        dur1 = time.time() - tstart

        tstart = time.time()
        ensure_spacing(coords, spacing=100, min_split_size=50,
                       max_split_size=20000)
        dur2 = time.time() - tstart

>       assert dur1 < dur2
E       assert 5.904011011123657 < 5.62450909614563
Is this a failure because we're testing something with the CI machines which have variable load, or because there's a random generation without a seed? And does this need to be patched before merge?
I saw this once in CI on the main repo and it passed upon re-running. It is a pretty newly added test case. We should perhaps relax the test condition a bit to account for variable load and architecture differences to avoid such sporadic failures. Mainly we wanted to ensure that the default choice of batch size (2000) is not substantially worse than a much larger default choice.
Something like dur1 < 1.25 * dur2 or dur1 < 1.5 * dur2. I will make a PR upstream and we can add another patch here...
I think it is okay to merge with that as-is, though. It is not a bug that would affect users, just an indication that the choice of default batch size may not be computationally optimal.
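Concretely, the relaxed condition could look like the sketch below; within_factor and the 1.5 tolerance are illustrative choices, not the actual upstream patch:

```python
def within_factor(dur1, dur2, factor=1.5):
    """Return True if dur1 is at most `factor` times dur2.

    Replacing the strict `assert dur1 < dur2` with this tolerance absorbs
    CI load and architecture jitter while still catching a default batch
    size that is substantially slower than the large-batch reference.
    """
    return dur1 < factor * dur2


# The timings from the failing run above: the strict comparison fails,
# but the relaxed one passes.
strict = 5.904011011123657 < 5.62450909614563
relaxed = within_factor(5.904011011123657, 5.62450909614563)
```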
One failure here that I haven't seen before. Not sure if it is just a fluke.
viewer\tests\test_widgets.py sssssss                                     [100%]
================================== FAILURES ===================================
_________________________________ test_uint8 __________________________________

    def test_uint8():
>       plt.figure()

        self._tkloaded = False
        # to avoid recursions in the getattr code in case of failure, we
        # ensure that self.tk is always _something_.
        self.tk = None
        if baseName is None:
            import os
            baseName = os.path.basename(sys.argv[0])
            baseName, ext = os.path.splitext(baseName)
            if ext not in ('.py', '.pyc'):
                baseName = baseName + ext
        interactive = 0
>       self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
E       _tkinter.TclError: Can't find a usable init.tcl in the following directories:
E           {%PREFIX%\tcl\tcl8.6}
E
E       D:/bld/scikit-image_1638988643419/_test_env/tcl/tcl8.6/init.tcl: couldn't read file "D:/bld/scikit-image_1638988643419/_test_env/tcl/tcl8.6/init.tcl": No error
E       couldn't read file "D:/bld/scikit-image_1638988643419/_test_env/tcl/tcl8.6/init.tcl": No error
E       while executing
E       "source D:/bld/scikit-image_1638988643419/_test_env/tcl/tcl8.6/init.tcl"
E           ("uplevel" body line 1)
E       invoked from within
E       "uplevel #0 [list source $tclfile]"
E
E       This probably means that Tcl wasn't installed properly.

..\_test_env\lib\tkinter\__init__.py:2270: TclError
@conda-forge-admin please restart ci
This PR continues on from #85, applying the proposed patch from scikit-image/scikit-image#6103
Checklist
- Reset the build number to 0 (if the version changed)
- Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)