Tosa95 / hx711

Raspberry Pi control module for hx711 load cell ADC

Let's get this up and running on my Pi Zero... #1

Open dgriggs opened 7 years ago

dgriggs commented 7 years ago

Okay I finally have time for this. I see you've laid out a good library so far. But I'm not sure how to properly compile and run this on my Pi. I've tried

python setup.py

but it cannot find the imports necessary. I'll keep working at it, but if I'm missing something obvious let me know.

EDIT: silly me! If you couldn't tell, this is my first time playing seriously with Python. You made a setup.py file, I just needed to run

python setup.py install

instead of just python setup.py.... facepalm

Tosa95 commented 7 years ago

Probably something is wrong with wiringPi; try following this tutorial: http://wiringpi.com/download-and-install/

dgriggs commented 7 years ago

I tried compiling the .c with

gcc -o hx711 hx711.c -lwiringPi

And wiringPi worked, but then it threw some other error at me. I'll check again later.


Tosa95 commented 7 years ago

Ok, I forgot to enable debug, so the C file doesn't have a main.

You have to uncomment line 17 in the C file in order to define the DEBUG constant and have the main working.

I disabled it because I mainly use the library from Python and I don't need the C main.
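For reference, the pattern is the usual conditional-compilation guard. This is only an illustrative sketch, not the actual contents of hx711.c:

/* Illustrative sketch only; the real line 17 and main live in hx711.c. */
//#define DEBUG              /* uncomment to compile the test main */

#ifdef DEBUG
int main(void)
{
    /* standalone test/debug code goes here */
    return 0;
}
#endif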

But of course there will be more errors after that hahaha. It's strange how things work perfectly on one system and fail so easily on another...

dgriggs commented 7 years ago

I know, right?

I'm going to try getting the C file up and running, then move on to Python just to learn more. I'll see what's wrong now and let you know haha.

dgriggs commented 7 years ago

Ok, the library is running fine both in C and in Python. There are a lot of error values in the data stream, maybe 1 in 20. I might add a separate rejection or averaging algorithm to give users pre-made options for cleaning their data.

But, I'm still unclear on the C <-> Python relationship. You say "Python wrapper" as if your .py files still refer back to the core .c file of the library. Could you clarify this relationship a bit for me? Google has given me surprisingly little on the topic. I'll keep studying the code.

Also, I would like to add a live "tare" function directly to hx711.c, so that users can apply this library to a real life scale with a "tare" button, instead of having to set both offset and div separately from the main program. I can just fork your repo and create a pull request, correct?

Tosa95 commented 7 years ago

Have you installed the module hx711 through

sudo python setup.py install

?

You have to do that in the project directory. Remember the sudo

On May 27, 2017 03:47, "David Griggs" notifications@github.com wrote:

Ah, that's right, 'test.py' throws Import exception:

"No module named hx711"

I'm not sure I understand PYTHONPATH or sys.path. Since everything is in one directory I expected python to find everything automatically...

Tosa95 commented 7 years ago

A little explanation is necessary. The problem here is that the hx711 module is not exactly a Python module; it's a C module compiled for Python. When you run the command I gave you, the C module is compiled and installed on the whole machine.

So theoretically from that moment every python program on the machine can reference the module, but if you do not install it, it won't be found automatically.

There's a way to install it only in the working directory (you can google for that), but I find it easier to install it for the whole machine.

dgriggs commented 7 years ago

Yep! Installed for the whole machine. Just not sure exactly how Python is "wrapping".

I'm also struggling with the toolchain, trying to get VS Code to allow me to directly edit RPi Zero files while also committing to the repo. I cannot get it to do both..

Tosa95 commented 7 years ago

For VS Code, I usually prefer doing git operations from the terminal through SSH. The advantage of this approach is that you learn git once and then you can use it for everything, for example with MATLAB or even with plain Windows Notepad. Moreover, I find the GUI of most git plugins a mess. Not the VS Code one, though; that one is well done.

However, have you tried mounting the remote folder into the local workspace (through the plugin I mentioned, ftp-remote or something like that: press F1, type "ftp", and from the options select the one that adds the remote folder to the workspace; it's experimental but works) and then using the git integration in VS Code?

As for the Python wrapping, it works as follows: in pythonizedhx711.c there is a list of the exported functions, with the function prototypes rewritten using the types defined in Python.h. The same header provides functions for converting C types to Python types and vice versa. Then setup.py lists the C files to compile, the libraries to link (wiringPi in our case), and some descriptive strings, and a Python function is called to compile the whole thing. It's pretty straightforward.
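As a rough illustration of that structure (Python 3 C API; the function and module names here are made up, not necessarily the ones used in pythonizedhx711.c):

#include <Python.h>

/* C function exposed by the library (prototype assumed for this sketch) */
extern long getRawData(void);

/* Wrapper: call the C function and convert the result to a Python int */
static PyObject *py_getRaw(PyObject *self, PyObject *args)
{
    return PyLong_FromLong(getRawData());
}

/* The list of exported functions mentioned above */
static PyMethodDef Hx711Methods[] = {
    {"getRaw", py_getRaw, METH_NOARGS, "Read one raw sample"},
    {NULL, NULL, 0, NULL}
};

/* Module definition and the init hook Python looks for on import */
static struct PyModuleDef hx711module = {
    PyModuleDef_HEAD_INIT, "hx711", NULL, -1, Hx711Methods
};

PyMODINIT_FUNC PyInit_hx711(void)
{
    return PyModule_Create(&hx711module);
}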

And the errors? If there are any please send me the console log so I can check.

I'm sorry for the not very detailed explanation but I'm on a trip and I haven't got my laptop with me so I can't try what I say.

dgriggs commented 7 years ago

No problem! I don't mean errors in your code, I mean your code lets in bad data. I wrote a filter to remove bad values, so far it looks like it will be 99.99% good data, and doesn't seem to slow down the code much. I'll do a pull request soon and we can discuss when you're back from your trip!

Now back to figuring out how to do this without ruining your repo lol.

dgriggs commented 7 years ago

I've got my Pi connected so I can edit files remotely on my PC with VS Code, but I haven't figured out how to make VS Code simulate those libraries on Windows 10 (meaning I have to compile on the RPi to debug...) or even on my Linux laptop, so for now I'm just going to do what you said and use git from the console only.

Tosa95 commented 7 years ago

I already wrote a filter based on the reading time: it simply throws away samples whose reading time exceeds the average reading time by more than 25%, and it works perfectly on my Pi 2.

Are you saying that it gives spurious readings on the Pi Zero?

What average reading time are you getting (you can see it through calibrate.py)? I was near 70 microseconds on the Pi 2, and spurious readings were usually over 140, so they were easy to detect. Probably my filter needs more tuning...
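For readers following along, the filter described above can be sketched roughly like this (names and the warm-up handling are illustrative, not the actual hx711.c code):

#include <stdbool.h>

static double avgReadTimeUs = 0.0;   /* running average of the read duration */
static long   nReads        = 0;

/* Accept a sample only if its read time is within 25% of the average. */
bool acceptSample(double readTimeUs)
{
    /* Let the first few samples through while the average settles. */
    bool ok = (nReads < 10) || (readTimeUs <= avgReadTimeUs * 1.25);

    if (ok) {
        /* Only good samples update the running average. */
        avgReadTimeUs = (avgReadTimeUs * nReads + readTimeUs) / (nReads + 1);
        nReads++;
    }
    return ok;
}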

dgriggs commented 7 years ago

The Pi Zero was averaging between 120 and 180 microseconds. Every 20-30 samples, one would come in a factor of 10 too high, so I simply added a filter on top of yours: a separate function you can call if you want extra-filtered data. At the same time that filter averages up to 200 samples, making my weight reading much more stable and readable.

dgriggs commented 7 years ago

Currently struggling with command line git hahah, need to do a tutorial before I ruin anything... already botched a push on my branch..

dgriggs commented 7 years ago

I've tried so long to get this permission issue solved, have you ever had this git error when trying to commit or even just stage files for a commit?

error: insufficient permission for adding an object to repository database .git/objects

I've tried all kinds of chmod, chown, etc on my local .git folder recursively, but nothing seems to solve it.

Tosa95 commented 7 years ago

It seems strange to me; I reach a precision of 1/10 of a gram on every sample. I computed the maximum difference over 300 samples and it was 0.00012 kg. Probably on the Pi Zero there are more interruptions by the OS during readings, because it has only one core and so a higher probability of being interrupted.

However, averaging samples is not an option for me, because I have a control problem with a settling time of around 5-10 seconds, and averaging over 200 samples would introduce too much delay.

But I can use an external PIC microcontroller if we can't solve it. It's a pity that the Raspberry doesn't have real-time features...

dgriggs commented 7 years ago

You can select any number of samples; I just picked 200 randomly as a maximum (array stack space). For me with a 40-50 sample rolling average it is still surprisingly responsive. But we'll talk more when you are back and when I finally have a grip on Git haha

I'm very curious to know what load cells you are using, my cells are cheap, and probably just give me dirtier voltage signals than yours. I'm trying to get that kind of precision, but I'm at about +/- 5g.... not great.

From the perspective of writing a library for strangers, we will want to test it on hardware people will commonly use, like Sparkfun's load cell amp and some cheap-ass chinese load cell pads haha.

I'm curious to buy other load cells regardless, as I've seen people talking about calibration factors (_div) of 1000 or more. Mine is at about 21....

Tosa95 commented 7 years ago

Haha perfect

Tosa95 commented 7 years ago

Hi,

I read your branch, and it's OK since the averaging can be avoided. It's not OK for my application, because even averaging over 20 samples gives a delay of 2 seconds (considering a sampling rate of 10 Hz) before the data reflects the current value. So I won't use it myself, but it's a really useful additional feature if you are not working on real-time applications, so that's fine.

The only bad thing is that you removed the setupHX711 method, but it's important.

Think about this: you have your system running and you want to calibrate it. The calibration task should be independent of the pins used. But if you are forced to use the init method, you have to take the pinout into account in the calibration script, and that's not ideal. The pinout information should be stored in a single file and used only once, at the initialization of the system.

So I think we should keep the possibility of changing calibration info without affecting the pinout. Obviously the name setup was not good for that method; something like changeCalibrationInfo would be better.

For your filtering: it's OK, but you can speed up the averaging. Remember that, by the mathematical definition of the average, the sum of the data involved in the average is equal to the average multiplied by the number of samples. So you do not need to sum up the old samples every time.

You can keep the old average, multiply it by the number of samples, subtract the sample that leaves the buffer, add the new one, and divide again by the number of samples. This lets us avoid cycling over the whole buffer when the filtered value is requested. A drawback of this method is that it forces you to update the average for every sample, but since no loop is involved that's not so bad.

Then, another issue: for every sample, you have to shift the buffer to make space for the new data. That's not ideal; here a double-ended queue is the best compromise, because you can add at the head in one operation and remove from the tail in one operation, with no loops involved. That works because we only ever access the buffer sequentially, so I suggest keeping the data in a deque instead of a plain array.
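A sketch of those two suggestions combined: a fixed-size circular buffer standing in for the deque, plus a running sum so the average never needs a loop (the buffer size and names are illustrative):

#define BUF_SIZE 200

static double buf[BUF_SIZE];
static int    head  = 0;     /* next slot to overwrite */
static int    count = 0;     /* samples currently stored */
static double sum   = 0.0;   /* running sum of stored samples */

void pushSample(double sample)
{
    if (count == BUF_SIZE)
        sum -= buf[head];    /* oldest sample leaves the buffer and the sum */
    else
        count++;

    buf[head] = sample;      /* overwrite the oldest slot (FIFO) */
    sum += sample;
    head = (head + 1) % BUF_SIZE;
}

double filteredValue(void)
{
    return (count > 0) ? sum / count : 0.0;   /* O(1), no cycling */
}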

I think that this is all for now, thanks for your contributions

dgriggs commented 7 years ago

Before I respond, if you don't mind, please tell me what load cells and HX711 configuration you are using. I'd like to know if you achieved better resolution and less erroneous data through less noise-prone hardware to begin with, or if I am doing something else different to get less resolution (+/- 5g) and a significant number of bad data reads from your getRaw function (sometimes -1 and sometimes 10x higher than normal). I'm using Sparkfun's HX711 breakout board, and some cheap flat half-bridges.

A rolling average is much more reasonable at my 80Hz than your 10Hz. I haven't yet checked how much noise is introduced going from 10hz to 80hz, so I'll set my HX711 back to 10Hz and compare to see if that is negating the benefits of averaging.

Yes, the name 'setup' threw me off; it felt redundant because I didn't realize your calibration.py uses it. I'm planning to add a "tare" function to zero the scale after initializing; now I see there's a use for your function as well, so I'll put that back in. Then we will have an Init function for all four inputs, your changeCalibration for calibration.py, and my internal "tare" function to record one or more data samples, average them if there is more than one, and set that as the new offset, zeroing the scale while in use without having to calibrate.
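A sketch of that tare routine under the assumptions above (getRawData and setOffset are hypothetical stand-ins for the library's raw-read call and offset setter):

extern long getRawData(void);          /* hypothetical raw-read call */
extern void setOffset(double offset);  /* hypothetical offset setter */

/* Read nSamples raw values, average them, and make that the new zero. */
void tare(int nSamples)
{
    double sum  = 0.0;
    int    good = 0;

    for (int i = 0; i < nSamples; i++) {
        long raw = getRawData();
        if (raw < 0)                   /* skip obvious error values */
            continue;
        sum += (double)raw;
        good++;
    }

    if (good > 0)
        setOffset(sum / good);         /* scale now reads zero under the current load */
}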

And many thanks for your input on efficiency, that's my weak spot in coding. I'm self taught, so Google often doesn't lead me to the best methods. I don't yet have a good feel for which methods are fastest for which compilers, and forums seem to have more opinions than facts on the matter. If you know of a definitive source for learning more feel free to refer it to me, I'd love to stop scanning forums and read some definitive text on it.

For now I'll work on the deque and finish fooling around with your python wrapper.

dgriggs commented 7 years ago

Your averaging suggestion is good, except that I would then have to add the outlier rejection also (those averages won't be any good to me with bad data mixed in them so I have to reject outliers first), which would mean we are right back where we started, running through each element of the array every time edge() is called. If there's a less expensive way to remove erroneous data before averaging, we could definitely still look at doing that. But for now I'm keeping that out of edge().

As for the deque, I think it may be even easier to just overwrite each array element in a FIFO style. Much faster than my first draft, but I don't know enough about deque to know if that's still faster.

Tosa95 commented 7 years ago

Of course we have to add outlier rejection in that way. But if you say that the bad samples are 10x the average we can simply throw away samples that are too different from the average. That is obviously a problem if the sensor can reach those values with good data. If that's the case, this approach won't work. But if it can be done, then we can get rid of the whole buffer.

For the buffer data structure, I didn't catch your idea. Do you mean something like a circular array? That's one of the implementations of a deque, except that in this case you only need one pointer, because as soon as the buffer is full the head and tail coincide. So if this is the idea, you are building a simplified deque that pops from the tail by overwriting. Perfect.

Tosa95 commented 7 years ago

My configuration is the cheapest I could find, because my project has to be cost-effective, so I bought an HX711 module and a 1 kg load cell from AliExpress, haha; it was like $4 in total. So my hardware is probably no better than yours, but through calibrate.py I read with a precision of 0.00013 kg, which is better than expected. The only thing to note is that sometimes, quite randomly, it starts with really high reading times (like 300 us) and I wasn't able to understand why. That's also because I don't have much time now: exams are approaching.

However, before continuing filtering bad data, we have to be certain of the origin of the problem.

When I started coding, I found that the critical part was the timing of the data reading. Do you see all those delayMicroseconds calls? They are the problem. They probably work well on a Pi 2 because it is fast, but on a Pi Zero they can make our program break the timing limits specified in the datasheet, resulting in bad readings. I expected this kind of issue on the RPi Zero.

My first idea is to get rid of the delayMicroseconds function and try polling the system time manually to respect the datasheet timings. In fact I saw that the wiringPi delay function does not respect the time you pass; for example, if you pass 1 us it can wait much longer, and in any case it is not very stable. Bad. And probably even worse on the slower Pi Zero.

If this doesn't work (which is possible, because reading the system time is a system call and so can be pretty slow), we can find an adaptive way to make delays by looping over useless operations.

By adaptive I mean that in the init function we run a fixed-length loop and time how long it takes, so we can compute the time per iteration and then produce precise delays without involving system calls, keeping our code independent of the hardware.

Of course that init loop should be long, for example 100 ms, in order to reduce the impact of the system clock's low precision on the computed time per iteration.
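A rough sketch of that adaptive busy-wait idea (micros() is wiringPi's microsecond timer and assumes wiringPiSetup() has already been called; the iteration count is a guess to be tuned per board):

#include <wiringPi.h>

static double nsPerIter = 0.0;

/* Run once at init: time a long fixed-length loop to measure ns per iteration. */
void calibrateBusyWait(void)
{
    const long    iters = 10000000L;   /* chosen to last roughly 100 ms */
    volatile long sink  = 0;           /* volatile so the loop isn't optimized away */

    unsigned int start = micros();
    for (long i = 0; i < iters; i++)
        sink += i;
    unsigned int elapsedUs = micros() - start;

    nsPerIter = (elapsedUs * 1000.0) / iters;
}

/* Delay by spinning instead of calling delayMicroseconds(): no system calls. */
void busyDelayUs(unsigned int us)
{
    long          iters = (long)((us * 1000.0) / nsPerIter);
    volatile long sink  = 0;

    for (long i = 0; i < iters; i++)
        sink += i;
}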

This is probably the best idea but I don't know if I have time for that. If you can try, I would be grateful.

Thanks

Tosa95 commented 7 years ago

On the question of optimization: that's not so easy. The first thing to keep in mind is that the compiler you are using shouldn't be taken into account when optimizing. Good algorithms perform well on all compilers and in all languages.

Understanding the compiler's logic is not straightforward and not necessary for most applications. If you are optimizing so hard that you have to consider the compiler, it's probably better to write those pieces of code in assembly. But if you are not an expert in that specific assembly language, you would end up with code slower than what the compiler would have produced.

The things on which you should concentrate are data structures and algorithms.

But, surprisingly, data structures are more important, because once you've chosen the right data structure the algorithm follows.

The most important topics are:

Stacks
Queues
Deques
Linked lists
Trees
Graphs (with all types of search, and especially Dijkstra's algorithm)
Hash tables

If you are working with signals then you should at least know of the FFT; not in detail, but you should know it exists, and there are plenty of implementations for every kind of hardware.

The same for sorting algorithms. It's ok to know how they perform in best and worst case but details on algorithm are not needed, they are already implemented.

If you deal with LP problems you should know about branch and cut, because self-made algorithms usually lead to wrong results.

I have a good book about data structures and algorithms in C; as soon as I'm home I'll send you the title and author.

Feel free to ask more.

dgriggs commented 7 years ago

Ah, your 1 kg load cell explains the higher accuracy then. These cells are typically rated by %FS (full scale), so even if our cells have the same accuracy, my 50 kg cell will still be 50x less accurate than your 1 kg cell. That makes sense. Good to know, though, that stable data is possible with cheap hardware. I'll look into the timing issue; weirdly, the errors (often showing up as -1, or 1662277 or something like that, or otherwise between 20% and 10x higher than normal) often come in at the low end of readAverage(). I am tempted at this point to introduce 2 more arguments that put hard limits on the data (reject if < 0 or > 500,000 or something like that), and then let users change them if they want stricter limits or loosen them up for different hardware...? I don't like the idea for some reason, but it does seem simpler than outlier rejection + averaging.
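For what it's worth, the hard-limit version would be tiny; something like this, with the bounds exposed to the user (values and names are illustrative):

static long rawMin = 0;        /* user-configurable lower bound */
static long rawMax = 500000;   /* user-configurable upper bound */

void setRawLimits(long lo, long hi)
{
    rawMin = lo;
    rawMax = hi;
}

/* Reject any raw sample outside the configured range before filtering. */
int rawSampleValid(long raw)
{
    return raw >= rawMin && raw <= rawMax;
}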

And I agree, I'd rather solve (what is most likely) a hardware problem instead of jumping through software hoops to "put a bandaid on it." My timing has ended up more like 180-210 microseconds on average. But it is truly weird that the -1's aren't any different timing-wise. Also, for a few minutes I disabled your timing filter and there were no unusually long reads... Not sure how many you saw from your Pi 2 that got filtered out, but it appears that filter isn't doing anything for me, at least over ~1000 or so points of data.

Tosa95 commented 7 years ago

On my Pi 2 I filtered out 1 sample every 50-100 with the timing filter, and all the other samples were accurate.

Try to modify delays as I suggested and see if reading time lowers.

However, it seems strange that no samples are thrown away, because interruptions do occur. Try lowering the tolerance on the time.

dgriggs commented 7 years ago

Have you observed data shifts over longer periods of time? I left mine running overnight and it shifted from 570 g to 615 g over 12 hours... I'll put it back to 80 Hz later to try another longer test.

Tosa95 commented 7 years ago

I've never tried for more than 10 minutes.

It is strange. If you stop and restart the program, does it fall back to the original values?

If not, it could be caused by temperature changes, since resistors are affected by that. Try heating your cell and see how much it changes.

However, that seems like too much to be caused by thermal excursion alone.

Tosa95 commented 7 years ago

I made some modifications on the master branch.

Specifically:

Then:

Tosa95 commented 7 years ago

Oh, and I thought about how to solve the heating problem so we always get accurate readings: we can add support to the library for a temperature sensor glued (with thermal paste or something similar) directly onto the load cell. Then we can correct the read data after extracting the actual correlation with temperature through the calibration script. It will obviously involve manual heating or cooling.
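Assuming the calibration script finds an approximately linear drift with temperature, the correction itself would be a one-liner (the names and the linear model are assumptions, not something already in the library):

static double tempRefC     = 25.0;   /* temperature during calibration, deg C */
static double driftPerDegC = 0.0;    /* raw-count drift per deg C, from calibration */

/* Correct a raw reading using the current load-cell temperature. */
double compensateForTemperature(double raw, double currentTempC)
{
    return raw - driftPerDegC * (currentTempC - tempRefC);
}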

But I think this specific feature can be set aside for now, at least until we reach the goal of having no bad samples on any Pi.