Open hattesen opened 5 years ago
Hi, You are more than welcome to add to the current controller by @dhiltonp features that you think will improve this. Note that for the ts80 build there is only around 512 bytes of flash free for any modifications. (And yes, any changes have to fit and work on both models)
USB will not work for many reasons that have been outlined at least 4-5 times in GitHub issues elsewhere on this repository. You are welcome to bit-bang this data out another port, but getting access to these pins is hard. My recommendation is to use something like the J-Link scope to record live RAM contents of all of these values for logging of the data.
I think it would be far better to have the current system more aware of the tip type (since that is known) and have it load known thermal masses for each of these instead, and to just tune each tip a bit better, rather than implement all of this in the code-space-constrained application.
That said, if you get this working better than the current one for all tip types (Ts100 official + Hakko + ts80) then I'm happy to support it.
@Ralim, I was unaware that the tip type was "known" to the firmware. In that case, it should obviously be detected, and the PID values adjusted accordingly. How is the current tip type detected?
> I think it would be far better to have the current system more aware of tiptype (since that is known) and have it load known thermal masses for each of these instead. and just tune each tip a bit better than implement all of this in the code space constrained application.
I agree that there is an opportunity for a quick gain by adjusting the P, I, D constants based on tip type as well as supply voltage (heating element power), since both may affect the optimal constants. With variable power, the heating rate changes, which makes the D(erivative) constant more critical.
I suggest we log/record the behavior (heating rate, temperature overshoot and its duration) for each tip type, as described above, along with the measured mass of the tip and the supply voltage, and document that information. That way, we could establish the optimal P, I, D values for each tip type, either as individual constants, or possibly as computed P, I, D constants derived by regression analysis from the tip weight. If a similar approach were taken for supply voltage (heating element power), a built-in auto-tuning function would no longer be required, as the P, I, D constants could be optimized on the fly, based on the current heating element voltage (power) and tip type (mass).
This approach essentially places the auto-tuning algorithm outside the embedded firmware, which reduces the storage (flash) impact to a minimum, as only the PID constants (tip_type -> (P, I, D)) or a transfer function ((tip_type_mass, supply_voltage) -> (P, I, D)) would need to be stored.
An alternative to using the "known tip type" to derive the thermal mass would be to determine the thermal mass from the temperature rise rate while the iron is heating up (well before the PWM duty cycle dips below 100%). That way, ANY tip could be accommodated (within a reasonable thermal mass range).
It would, however, still provide a stepping stone towards a built-in auto-tuning function, although it may be a very tight fit in 512 bytes. One way to obtain additional free Flash might be to use an alternative PID algorithm implementation – although I have yet to look at the current implementation.
Q1: Should I create a separate issue for the "pre-tuning" algorithm, to obtain optimized P, I, D constants based on tip type and supply voltage, and cross-link this issue?
Q2: We would still need to establish a "low impact" way of logging/recording/capturing time->temperature (and possibly more). Any additional suggestions+details would be appreciated. I am personally leaning towards...
The biggest temperature control issue right now is a non-linearity due to thermocouple placement (see #360).
Specifically, there are more than 5 seconds of thermal lag between an external measurement at the tip, and the internal thermocouple temp.
The tip temp can be 100C below the thermocouple reading when heating up. More importantly, that difference can occur when the iron is in use - this is why eevblog Dave had issues with soldering a large thermal mass.
I've been looking at a feed-forward aspect - basically using power and temp history to detect that the tip is active and so is reading low (thinking about it as calculating 'real' temperature given that history).
That said, I'm on the road for the next 1-2 months. I'm really interested in what you come up with! I'll be available for discussion, but can't really test stuff.
> The biggest temperature control issue right now is a non-linearity due to thermocouple placement (see #360). Specifically, there are more than 5 seconds of thermal lag between an external measurement at the tip, and the internal thermocouple temp. The tip temp can be 100C below the thermocouple reading when heating up. More importantly, that difference can occur when the iron is in use - this is why eevblog Dave had issues with soldering a large thermal mass. I've been looking at a feed-forward aspect - basically using power and temp history to detect that the tip is active and so is reading low (thinking about it as calculating 'real' temperature given that history).
@dhiltonp, thank you for your feedback. I was unaware of the magnitude and status of that issue. I agree that it is critical to get better control over the temperature measurement deviation (#360) before attempting any further optimization, compensation or tuning, which would just add to the complexity of the controller and make it more difficult to solve the basic issues.
I will concentrate on that issue, and try to contribute to #360 before proceeding with any work on other PID tuning aspects. I believe that the best way forward is to build a (simple) mathematical model of the environment, to allow simulations to be performed – more on that in #360.
Technically it is possible to measure the temperature of the heating coil itself by measuring its resistance and comparing it to the resistance of the coil at room temperature: for a metal element, R ≈ R0·(1 + α·(T − T0)), so T ≈ T0 + (R/R0 − 1)/α.
By knowing the temperature of the coil and knowing how far off the thermocouple element is, the software could then draw useful conclusions about the actual tip temperature.
I don't know if this is helpful but wanted to throw it in here.
Edit: Ok, nevermind. I didn't know the thermocouple and the heating element are connected in series.
My question to everyone is: does it really matter? Unless you are doing some very rare, hi-tech work, you would never work from a table of temperatures, and I've never heard of anyone who looks at a job and says, "that's going to need 345.73C". Certainly when I buy solder I always take note of the melt point and make a mental note ... similarly when I look at a job, the little voice says ... ooo, lots of plastic - be careful - or big pad, that'll need a bit more oomph ... but I also know that for the last 50 or so years I've pretty much always used a lead/tin alloy, 60/40 or 63/37, that needs about 320C to make a decent joint on through-hole. It's actually only for the last two years that I've had a temp-controlled tip - and the TS80 and 100 are IMHO both sacks of the brown and smelly - both examples of tech for tech's sake that actually gets in the way of doing the job well.
I started by heating a copper slug on a stove in a field - I progressed to using an iron with an element that the manufacturer had designed not to turn my workspace into ash. I can pretty much look at the magic happening as the alloy coats the components - and from the way it flows, the shape of the margins and the look of the resin "jewels", you either get a good feeling that the stuff has melded - or sometimes you know that it's "not right" ... maybe burned or melted but not melded. Either you chance it or do something about it - but usually when it doesn't feel good you end up revisiting quite soon after to make it right. I don't need this degree of accuracy to make a decent joint, and I don't believe such precision helps anyone else achieve it either. I have worked on precision electronics where tolerances were strictly stated and adhered to. The building and kit we used cost much more than this tech toy.
I know that the TS80 I use and the 100 (which I don't) have been built to the usual "good enough" standard to drive a display and give a good performance for those who need that sort of encouragement. But while current and voltage may be stable and consistent, and movement may be detected by an accelerometer rather than a drop of mercury and a piece of bent wire, I'd be very surprised to find that the circuitry is capable of accurate AVO measurements and numeric processing sufficient to calculate accurate temperatures. Usually the requirement is to chuck in a measured amount of juice sufficient to achieve and maintain certain levels with reference to base numbers - we sort of know voltage and tip resistance - so you can guesstimate a sensible starting level - then the requirement is to maintain whatever levels are set by the user to whatever tolerance is acceptable. That's usually achieved by poke and hope because that's the cheapest and most accurate way - you tension a bi-metal strip to suit the user - the quality of the manufacture sets the tolerances - and it's easy enough to create an approximate estimate of likely temperatures - but the acid test is whether the solder melts, this time and next time - the actual numbers are in the user's head.
I've just bought a chinesium special (xmas novelty) soldering iron made from a re-purposed vape handle... It's a great example of lateral thinking - and it's as accurate as my ts80 and my bench kit all for £12! ( https://www.ebay.co.uk/itm/Soldering-Iron-Wireless-Charging-Soldering-Iron-Mini-Battery-Soldering-Iron-DIY/223700474945?hash=item3415965c41:g:H7sAAOSwKAFdnx3O ).
Yes, it's a toy, or a very niche tool restricted by a tiny thermal mass which limits its range of usefulness. But because I had to set it up and always have to think about the job I want it to do - I'll go one step further and say the job it does do is actually better than the TS100/80, because the workpiece isn't so obscured by smoke and mirrors. If the techies rely as heavily on the numbers to do the job as appears to be the case - it's always going to limit the quality and not enhance it.
@hattesen As explained by @Ralim, flash space is the crucial constraint here. And since the TS80P's space is already depleted, this issue should be closed. 😊
thanks
Overview
I propose adding a menu item and/or a button-press combination or sequence to automatically tune (optimize) the P, I and D gain parameters of the temperature control algorithm to match the current environment (mainly tip type and input voltage), achieving a responsive and stable temperature control profile.
Background
The optimization of the parameters (constants) of the PID temperature controller is critical to achieving a fast-reacting, stable temperature. But the optimal values for the Proportional, Integral and Derivative parameters of the PID control algorithm are not easy to establish, and they vary depending on many factors, including heating element power (a result of supply voltage), target and ambient temperature, and the thermal mass of the soldering tip.
Being able to optimize the values of the P, I and D parameters, for the current operating conditions would therefore be a vast improvement to the users of the TS100.
Current Behavior
Currently, using a fixed set of PID parameters, the accuracy, stability and reactiveness of the temperature of the soldering tip change depending on:

- supply voltage (heating element power)
- thermal mass of the soldering tip
- target and ambient temperature

Depending on the above variables, the temperature control may exhibit one or more of the following symptoms:

- slow heat-up and slow reaction to temperature drops
- temperature overshoot after heat-up
- temperature oscillation around the target
- a steady offset from the target temperature
The above symptoms can all be eliminated by adjusting the value of the P, I and D parameters (multiplication factors).
In general, though, a slowly changing but stable temperature is preferable to temperature overshoots and oscillations, and therefore the PID parameters must be set conservatively enough to avoid the latter under all operating conditions. This typically means that a user with "average" conditions will experience relatively slow heat-up and reaction times before the target temperature is reached.
Proposal
The addition of an Auto Tune menu item (or a button-press combination/sequence) that will automatically optimize the values of the P, I and D parameters, to achieve fast yet stable temperature control for the current environment. The Auto Tune will be most useful when the supply voltage (power source) changes, or when a tip with a different thermal mass is fitted.
Implementation
Although tuning algorithms exist that monitor temperature deviation patterns and continually adjust the PID parameters to approach optimal values, such an approach is non-trivial; furthermore, this level of continual adjustment may itself cause temperature instability and unpredictable behavior, potentially damaging components or the soldering iron.
Therefore, I propose a safer, and more common, PID tuning approach that is manually activated and performs a single temperature-profile sequence, e.g. applying full power while

temp_tip < temp_ambient + (temp_target - temp_ambient)/4

and recording the heating rate and subsequent overshoot. The above measurements can be used to estimate suitable PID parameter values, which can then be set/stored. To further tune the parameters, the cycle can be repeated one or more times with the PID control algorithm activated, with minor adjustments automatically applied to the P, I and D values based on any overshoot, undershoot (oscillation) or temperature offset measured using the current parameters. This cycle can be repeated a fixed number of times, or until a satisfactory temperature control profile is achieved. Before exiting the Auto Tuning function, the P parameter should be reduced slightly to prevent oscillation resulting from minor changes in voltage, target or ambient temperature.

Once the tuning process has been established and field tested, it would be possible to estimate the adjustments required to the P, I and D parameters when the target (and ambient) temperature changes, reducing the need to perform another auto-tuning cycle.
Forward path
I suggest creating a branch to experiment with the tuning algorithm. I have worked with industrial PID temperature controllers and tuning algorithms, and have implemented several PID controller algorithms myself, and I would be happy to take an active role in implementing this feature.
Time/temperature logging
It would be a great help in proving the auto-tuning algorithm if we had some way of logging time/temperature values (and possibly a little more). I am not too familiar with which options may be available, as I have only glanced over the source code and schematic. Would it be possible to record timestamps (millis) and associated sensor readings and events, possibly with minor modifications to the TS100 hardware, using one of the techniques already mentioned (e.g. bit-banging the data out of a spare pin, or reading live RAM contents with a debug probe)?
Data to be made available, with associated millisecond timestamp, in order of importance:
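As a hypothetical record layout for such a log (the field choice is an assumption on my part — tip temperature, heater duty and supply voltage — since the value list is not spelled out above):

```cpp
#include <cstdint>

// Hypothetical packed record for streaming over a spare pin or dumping
// from RAM with a debug probe. 8 bytes per sample keeps even a 10 Hz
// log at a modest 80 bytes/second.
#pragma pack(push, 1)
struct LogRecord {
    uint32_t millis;      // timestamp since boot, ms
    int16_t  tipTempC10;  // tip temperature in 0.1 C units
    uint8_t  dutyPercent; // heater PWM duty, 0-100
    uint8_t  voltageDV;   // supply voltage in 0.1 V units (max 25.5 V)
};
#pragma pack(pop)
static_assert(sizeof(LogRecord) == 8, "record must stay packed");
```

Fixed-point fields avoid float formatting on the iron; scaling back to real units would happen on the host side.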
References
TBD: Control and tuning algorithms
TBD: Typical symptoms from P, I and D being too low/high (textual and charts).
TBD: Actual measurements (time/temp charts) of TS100 operating in various environments.