Ralim / IronOS

Open Source Soldering Iron firmware
https://ralim.github.io/IronOS/
GNU General Public License v3.0

Missing option: Tip gain #872

Closed Sinclair-ZX81 closed 3 years ago

Sinclair-ZX81 commented 3 years ago

This is a Request

@Ralim - your firmware is brilliant - please can you add this very important feature again? Many thanks! :O)

Ralim commented 3 years ago

Hi,

This was removed because the tip model was changed from using a simple gain to using an actual response curve; there is no single "gain" anymore, as it varies over the temperature range.
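A minimal sketch of the distinction, with invented numbers (not the IronOS source): under the old model a single scalar converted voltage to temperature, whereas a curve has a different local slope in each segment, so there is no one value left to expose as a setting.

```cpp
#include <cstdint>

// Old model (illustrative): one user-settable scalar, temperature is linear
// in the tip voltage across the whole range.
int32_t tempFromGain(int32_t tipMicroVolts, int32_t gainUvPerDegC) {
  return tipMicroVolts / gainUvPerDegC;
}

// New model: sample points on a response curve (values invented). The local
// slope is 3000/100 = 30 uV/°C in the first segment but 4600/150 ≈ 30.7
// uV/°C in the second, so no single "gain" setting reproduces both.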

Sinclair-ZX81 commented 3 years ago

Hi Ben,

well, besides the offset calibration in the cold state, how is it possible to compensate for tip and/or electronics tolerances in the heated-up condition?

I have used your FW v2.06-RC4 for a very long time, as to me the setup option "tip model" -> "user tuned" + "advanced calibration" was the very best solution. That way I achieved spot-on tip temperatures in the range of 260 ... 360 °C with only +/- 2 °C deviation from the set / shown temperature on the display (FG-100 verified).

I work a lot with optical sensors and display calibration. In my experience it is not a good idea to have just a single baseline correction value, not even in a 1D system. Too many deviations can take effect when operating near the upper specification limits (for the iron -> the highest possible temperature). In display calibration, a LUT (based on multi-point readings) is still the best solution if your goal is a perfect calibration with the lowest deltaE. In your case you should provide at least a 2-point calibration (at room temperature and at 360 °C).

So in terms of the TS100, you shouldn't rely on a single-point calibration followed by calculations using the response curve, as you will never know how accurate the response will be (cheap sensors, etc.). As said, my two TS100s have deviations of up to 15 °C when set to 320 °C if only offset-calibrated at room temperature.

So, long story short: next to the room-temperature reading you should also take a high-temperature reference reading (e.g. at 360 °C) into account in the response-curve calculation. Of course, that 2nd reading is just for users who have an FG-100 or something similar. If the user doesn't have that option, the FW should behave as it does now (ignoring the 2nd reference).

KR, Harry

Ralim commented 3 years ago

Hello,

When you were measuring with the FG-100, were you testing in 5 °C increments from 100-400 °C? I sincerely doubt you were measuring that tolerance across the operating range.

The tips that are used have a non-standard µV/°C response curve that is most definitely not linear.

The old code assumed linearity as it was "good enough" most of the time. While this was non-optimal, it was easier for people to work with.

A while back I sat down and measured the µV/°C on 6 tips on two irons to build a reasonably accurate µV/°C lookup table.

This started from the graph in the HAKKO patent on this style of tip construction, and then we slowly built up our own lookup table that was a tad closer to the actual tip measurements.
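For illustration, such a table is typically applied with piecewise-linear interpolation between the measured points. A minimal sketch with invented sample values (the real IronOS table was measured against tips as described above):

```cpp
#include <cstddef>
#include <cstdint>

struct CurvePoint {
  int32_t uV;   // tip thermocouple output in microvolts
  int32_t degC; // measured tip temperature at that voltage
};

// Invented sample points, sorted by voltage; not the IronOS table.
static const CurvePoint kTipCurve[] = {
    {0, 0}, {2660, 100}, {5340, 200}, {8080, 300}, {10880, 400},
};

int32_t tipTempFromMicroVolts(int32_t uV) {
  const size_t n = sizeof(kTipCurve) / sizeof(kTipCurve[0]);
  if (uV <= kTipCurve[0].uV) {
    return kTipCurve[0].degC;
  }
  for (size_t i = 1; i < n; i++) {
    if (uV <= kTipCurve[i].uV) {
      // Linear interpolation inside the segment [i-1, i].
      const int32_t dUv = kTipCurve[i].uV - kTipCurve[i - 1].uV;
      const int32_t dT = kTipCurve[i].degC - kTipCurve[i - 1].degC;
      return kTipCurve[i - 1].degC + (uV - kTipCurve[i - 1].uV) * dT / dUv;
    }
  }
  return kTipCurve[n - 1].degC; // clamp above the last table entry
}
```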

Remember that these tips are only really repeatable to about 1-2% over time. If you need absolutely precise 1 °C accuracy, using a soldering iron that is designed down to a price is not recommended.

Keeping in mind that the entire signal path for temperature control is temperature sensitive, and that we do not have enough temperature sensor coverage to be able to model both tolerance and thermal noise, it would indeed make sense to do a multi-point calibration.

However, as the response curve is not linear, it would need to be a calibration from 50-400 °C in ~25 °C increments (roughly 15 measurement points).

This would be a fair amount of work and effort, and non-trivial for a user to set up and perform.

Remember this is not a screen, where you want perfect colours because there is no feedback loop; with a soldering iron you can always adjust the temperature up and down if the solder is not melting. Seeing as you generally move between different board stackups and ground planes while soldering, you are most likely inclined to nudge the temperature up and down occasionally anyway, to pre-emptively adjust the amount of heat in the tip's thermal mass.

Also, the response of the tip is extremely repeatable across tips in my testing so far (excluding damaged tips); most of the variation is in the handle parts, as some of these have different-tolerance resistors, LDOs, and ADC calibrations.

The point of the single zero-point calibration is to null out the ADC offset, the op-amp offset, and the LDO variation a little.

This still leaves gain errors from the op-amp and its associated resistors. It also completely ignores the small gain variation between tips of different construction.
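A sketch of what that single point can and cannot fix, under the simple error model reading = gain × true + offset (hypothetical names, not the IronOS source):

```cpp
#include <cstdint>

// With the tip cold, the true temperature is known from the handle sensor,
// so the constant offset term can be solved for and subtracted. The gain
// term is untouched, so any gain error still grows with temperature.
static int32_t gOffsetUv = 0;

void calibrateZeroPoint(int32_t rawUvAtAmbient, int32_t expectedUvAtAmbient) {
  gOffsetUv = rawUvAtAmbient - expectedUvAtAmbient; // nulls ADC/op-amp offset
}

int32_t correctedMicroVolts(int32_t rawUv) {
  return rawUv - gOffsetUv; // gain error (op-amp resistors, tip build) remains
}
```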

If we were to add a second temperature calibration point, how should that adjust the LUT?

There is still an open GitHub issue looking at different solutions to this, and there is still the ongoing tension between the ideal and the practical limitations.

Sinclair-ZX81 commented 3 years ago

Hi Ben:

When you were measuring with the FG-100, were you testing in 5 °C increments from 100-400 °C?

Hmmm ... do you have any solder with a melting point below 200 °C for electronics work?

No, I didn't check in 5 °C increments but in 10 °C ones, because I almost never set the temperature to anything other than a multiple of 10 °C. I started at 260 °C, as I have no low-melting-point solder. As said, when calibrated using your FW 2.06 RC4, the FG-100 check result was within +/- 2 °C in the range of 260 ... 360 °C.

Of course Miniware irons are not really high-end gear, so I see no need to get them absolutely accurate down to 1 °C deviation at every point in a 260 ... 400 °C range. However, I was surprised by the accuracy achieved by my 2 original Miniware TS100s when calibrated using FW 2.06 RC4!

A while back I sat down and measured the µV/°C on 6 tips on two irons to build a reasonably accurate µV/°C lookup table. This started from the graph in the HAKKO patent on this style of tip construction, and then we slowly built up our own lookup table that was a tad closer to the actual tip measurements.

No doubt that will work for your irons / tips, but you cannot assume that it will work for all the other gear out there. Believe it or not ... as said, with FW v2.14.1 mine have up to 15 °C deviation when set to 360 °C. That shows your correction table does not work in every case.

However, as the response curve is not linear, it would need to be a calibration from 50-400 °C in ~25 °C increments

You eliminate the ADC offset with the room-temperature reading; that is important and good as a first step. But in my opinion it should be possible to reduce the gain deviation as well, if you took at least one real high-temperature reference reading into account as an additional correction to your fundamental response-curve LUT.

If we were to add a second temperature calibration point, how should that adjust the LUT?

I can think of 2 solutions (a sketch of both follows after option b):

a) Take the user-entered correction value measured at 400 °C; that is correction point "B". Compare it with your 400 °C "response LUT" value. The correction weighting at this point is 100 % in the final correction calculation. Point "A" is a low-temperature point, say 100 °C (referenced to your response LUT), and has 0 % weighting (presuming there is still no significant deviation compared with the room-temperature reading). Then calculate a new LUT from point "A" (0 % correction) up to "B" (100 % correction). That means you get a new LUT, referenced to your "response LUT" but corrected by a %-amount that increases towards the 400 °C deviation. Save the new temperatures in a new LUT.

b) Same as a), but for point "A" provide a 2nd user measurement option at 250 °C (or 300 °C) instead of referencing your "response LUT" absolutely. Treat points "A" and "B" as 100 % correction when calculating the correction LUT (again referenced to your response LUT, if desired). How you handle temperatures below 250 °C is up to you.
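A sketch of how option a) could rebuild the table, with hypothetical names (this is the proposal above, not IronOS code): take the single FG-100 deviation at point "B" and fade it in linearly from 0 % at anchor "A" to 100 % at "B". Option b) would use the same loop, but interpolating between two user-measured deviations (at "A" and "B") instead of ramping a single one from zero.

```cpp
#include <cstddef>
#include <cstdint>

struct LutEntry {
  int32_t uV;   // tip reading
  int32_t degC; // temperature the stock response LUT assigns to it
};

// Option a): the user measures the real temperature once, with the display
// showing setpoint "B" (e.g. 400 °C). The deviation at B is blended into
// every entry, from 0 % at the low anchor "A" (e.g. 100 °C) to 100 % at B,
// leaving the cold end of the table untouched.
void applyBlendedCorrection(LutEntry *lut, size_t n,
                            int32_t aDegC,         // anchor "A", 0 % weight
                            int32_t bDegC,         // anchor "B", 100 % weight
                            int32_t measuredAtB) { // FG-100 reading at B
  const int32_t deltaB = measuredAtB - bDegC; // deviation at point B
  for (size_t i = 0; i < n; i++) {
    const int32_t t = lut[i].degC;
    // Weight in permille: 0 at or below A, ramping linearly to 1000 at B.
    int32_t w;
    if (t <= aDegC) {
      w = 0;
    } else if (t >= bDegC) {
      w = 1000;
    } else {
      w = (t - aDegC) * 1000 / (bDegC - aDegC);
    }
    lut[i].degC = t + deltaB * w / 1000;
  }
}
```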

I just wonder how accurate FG-100 measurements will be if you put the tip on the sensor when the set temperature is below the melting point of the solder used.

Anyway - many thanks for your good work and sharing your Firmware with us !!! Much appreciated!

KR, Harry

Ralim commented 3 years ago

Hi Ben:

When you were measuring with the FG-100, were you testing in 5 °C increments from 100-400 °C?

Hmmm ... do you have any solder with a melting point below 200 °C for electronics work?

No, but that is the range being supported at the moment, as some people use these for things other than soldering 😂

No doubt that will work for your irons / tips, but you cannot assume that it will work for all the other gear out there. Believe it or not ... as said, with FW v2.14.1 mine have up to 15 °C deviation when set to 360 °C. That shows your correction table does not work in every case.

Well, I mean, I did spot-check over 20 tips without any issues at all, maintaining a respectably close result. So yes, it's not a conclusive study, but it's also not a "works for me" single test. There are others out there too who have had a much closer experience with the new method than the old one, even with gain adjustment.

However, as the response curve is not linear, it would need to be a calibration from 50-400 °C in ~25 °C increments

You eliminate the ADC offset with the room-temperature reading; that is important and good as a first step. But in my opinion it should be possible to reduce the gain deviation as well, if you took at least one real high-temperature reference reading into account as an additional correction to your fundamental response-curve LUT.

Yes I agree with you, and a PR is most welcome.

I just wonder how accurate FG-100 measurements will be if you put the tip on the sensor when the set temperature is below the melting point of the solder used.

For my main test ones I fused a K-type thermocouple to the end of the tip, and only use the FG-100 for spot checks as it's intended.

Anyway - many thanks for your good work and sharing your Firmware with us !!! Much appreciated!

KR, Harry

Always welcome :)