garethky opened 2 weeks ago
The tests failed because of the numpy dependency. I guess I could move the check out of `__init__` to get them to pass.
Such a pity numpy is so bloated and time-consuming just to add it to the requirements.
I'm not sure this has a real-world impact. I guess most users, or at least a significant portion, also use input shaping and have it installed anyway. It might be even better to add it to the requirements and pin a version <1.26 to avoid clashing with input shaping.
I think the sheer time to compile numpy is the issue. If the host is sub-par, it can take hours on some machines.
I would love to get on my "just require numpy" soap box (binaries are installed automatically for 32-bit, so you don't need to compile it; it's shockingly fast vs plain Python; "your job as a Python programmer is to write C") but... I want to ship this and be 'done' with the project. So what do I have to do?
I can push the numpy requirement check back to the point where it's clear that you are configuring a probe. Most of what I'm using it for in this PR (`average`, `min`, `max`, `unique`) is relatively easy to replace or remove.
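A minimal sketch of what that deferred check could look like (the helper name `require_numpy` is hypothetical, not code from this PR):

```python
def require_numpy():
    # Only called once the config actually sets up a load cell probe,
    # so plain [load_cell] users never need numpy installed.
    try:
        import numpy
        return numpy
    except ImportError:
        raise RuntimeError(
            "numpy is required for load cell probing; "
            "install it to use this feature")
```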
For probing, I need `linalg.lstsq`. There are some optimizations I discussed with @KevinOConnor to replace the bulk of those calls, but I still have to make several calls to plain least squares. numpy/C is so much faster than Python that any optimizations we do might be washed away because they aren't in C (plus the code is very complicated). There is also, very likely, a numpy/vectorized version of what we want to do that will crush the pure Python version (it has loops, so it's probably 50x slower than numpy). If this code is slow, it causes noticeable pauses.
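For context, a minimal sketch of the kind of `numpy.linalg.lstsq` call involved, fitting a line to force-vs-position samples; the function name and sample data are made up, not the PR's actual probing code:

```python
import numpy as np

def fit_line(positions, forces):
    # Design matrix [x, 1] for the model: force = slope * position + intercept
    a = np.column_stack((positions, np.ones(len(positions))))
    # lstsq returns (solution, residuals, rank, singular_values)
    (slope, intercept), _, _, _ = np.linalg.lstsq(a, forces, rcond=None)
    return slope, intercept

# Made-up sample data: force in grams ramping up as the toolhead descends
positions = np.array([0.00, 0.01, 0.02, 0.03, 0.04])
forces = np.array([1.2, 8.9, 17.1, 24.8, 33.0])
slope, intercept = fit_line(positions, forces)
```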
Is there any way to provide / use a prebuilt v1.26 wheel?
For users: if you use Python 3 on a 32-bit Raspberry Pi OS then piwheels will just install a binary. That's the default behavior. You have to work kind of hard to not have that happen, i.e. willfully use Python 2 or pick a 64-bit OS.
For the git build system: I'm not sure how that works.
> For users: if you use Python 3 on a 32-bit Raspberry Pi OS then piwheels will just install a binary.

PyPI has some arm64 wheels as well.
> For the git build system: I'm not sure how that works.
There are examples of projects that build the necessary wheels as CI actions, e.g. https://github.com/matrix-org/synapse/pull/14212/files. Maybe it would be an option to build the needed wheels (Python 2 and Python 3?) in Klipper's CI, so the install scripts (or KIAUH) could automatically draw upon them.
This was updated to remove the numpy dependency.
This PR adds the features needed to make a load cell actually work as a gram scale. It will let you weigh filament or measure a force in grams (no homing or probing yet!). It converts raw sensor `counts` to grams and makes this force data available via the unix socket and the printer object status (a sketch of the basic conversion follows the list below). This code has been tested by several people in the community and some last-minute bug fixes and changes were landed:
- A `reverse` option allows the polarity of the force readings to be inverted. You can have your probing collision graphs in either orientation now.
- `reference_tare_counts` is written to config and read as an integer. (link)
- A fix for the case where `CALIBRATE` was used before `TARE`.
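For reference, a minimal sketch of the counts-to-grams conversion described above, assuming a simple linear calibration; `counts_per_gram` is an illustrative name, not necessarily the PR's actual config option:

```python
def counts_to_grams(counts, reference_tare_counts, counts_per_gram, reverse=False):
    # Subtract the stored tare point, then scale raw ADC counts to grams
    grams = (counts - reference_tare_counts) / counts_per_gram
    # The reverse option flips the polarity of the force readings
    return -grams if reverse else grams
```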
A lot of work (months) and bug fixing went into parts of this, particularly `LOAD_CELL_DIAGNOSTIC` and the `LoadCellSampleCollector`. These bits have to work when the sensor is buggy and still produce usable output. `LoadCellSampleCollector` also underpins later work for probing.
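Purely as an illustration of the kind of fault tolerance involved (hypothetical names and API, not the PR's actual `LoadCellSampleCollector`), a collector that keeps producing usable output from a flaky sensor might look like:

```python
import time

class SampleCollector:
    """Illustrative only: gather (timestamp, counts) samples for a fixed
    duration, skipping samples the sensor flags as errors."""
    def __init__(self, duration):
        self.duration = duration
        self.samples = []

    def collect(self, read_sample):
        # read_sample() is assumed to return (timestamp, counts, is_error)
        deadline = time.monotonic() + self.duration
        while time.monotonic() < deadline:
            timestamp, counts, is_error = read_sample()
            if is_error:
                continue  # a buggy sensor should not poison the data set
            self.samples.append((timestamp, counts))
        return self.samples
```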
numpy is required for this PR. We could change that, but PRs for probing will absolutely need it. If you want to merge `[load_cell]` and `[load_cell_probe]` then it seems there is no point in trying to keep it out of this PR.