When doing the last one, do it with PoE, both TEC drivers at full current and use the opportunity to take a couple thermal camera photos. That would yield a lot of info on #38 .
When doing the last one, do it with PoE, both TEC drivers at full current
How much power can the PoE supply in v1.0?
Otherwise, sure, if I can track down a thermal camera I'll do that.
Full beans. 25.5W to the device. But even with external 12V it would be useful if you don't have 802.3at.
Okay, I'm not up to speed on this, but I thought that required the AT detect line, doesn't it? Or does that just tell the CPU what power is available?
Okay, I'll see how much juice my PoE switch has and run this as hard as possible from both sources.
The AT line is just for letting the CPU know that it is allowed to draw that much.
@jordens doesn't this imply that the firmware usually needs to do something to enable >15W of power from the switch?
Where does it say that? Classification is negotiated between PSE and PD (the module) and then signaled to the CPU via AT. There is nothing the CPU can do.
In the docs I pasted above, the module states that the controller needs to negotiate Type 2 PD over the data link layer. As I understood it, the "controller" here is the microprocessor, not the PoE module. The PoE module doesn't touch the data layer, right, since that goes on separate wires to the PHY? Anyway, it's also very possible that I'm misunderstanding the basic terminology here...
Ah. That's LLDP over Ethernet for data layer classification (DLC) as opposed to physical layer classification only which we are doing. I am a bit unsure about the standard and when DLC for type 2 is a must. The datasheet calls it a "should". The switches I have (but no Cisco) don't need DLC.
Okay, well, it may or may not be an issue for the switches people want to use. But worth bearing in mind.
Yes. If this turns out to be a problem or desirable feature, then implementing LLDP DLC doesn't look too complicated.
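For reference, a rough sketch of what the "Power via MDI" LLDP TLV a Type 2 PD advertises for DLC could look like (Python for illustration; the field layout is from my reading of IEEE 802.3 Clause 79, and the constant values below are illustrative, so verify against the standard before relying on this):

```python
import struct

def power_via_mdi_tlv(requested_dw: int, allocated_dw: int) -> bytes:
    """Build the TLV; power arguments are in deciwatts (255 -> 25.5 W)."""
    oui = b"\x00\x12\x0f"       # IEEE 802.3 organizationally unique identifier
    body = oui + struct.pack(
        ">BBBBBHH",
        0x02,                   # subtype: Power via MDI
        0x00,                   # MDI power support bits (none set for a PD)
        0x01,                   # PSE power pair: signal pairs
        0x05,                   # power class: Class 4
        0x51,                   # power type/source/priority: Type 2 PD
        requested_dw,           # PD requested power [0.1 W]
        allocated_dw,           # PSE allocated power [0.1 W]
    )
    header = struct.pack(">H", (127 << 9) | len(body))  # org-specific TLV
    return header + body

print(power_via_mdi_tlv(255, 255).hex())
```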
Taking prelim data (will post in morning) but RMS noise is about 2LSB with PoE, not bad!
Assume the thermistor has dR/dT = 4.4e-2 /K. The thermistor voltage is approx V_th = V_r/2, so 1 LSB = V_r/2^24 = V_th/2^23 = 5 nK. RMS noise is about 15 nK, pk-pk is 0.1 uK.
Nice! Impressive it works so well with PoE!
That's really good. What's the limit here? If I am not wrong, this meets the datasheet (16 S/s with the 50 Hz filters: ~0.5µVrms ~2LSBrms).
Yes @jordens, my understanding is that this measurement is roughly consistent with the noise floor of the ADC with the filters we've chosen and the buffers enabled. Not going to guarantee that's still going to be the case with the TEC drivers at full whack, however.
What's interesting is that upping the output data rate doesn't really affect the noise floor, only the 50 Hz rejection. Even then, in most cases the noise floor could be quite a bit higher without issue, so there is room for users to trade noise rejection for loop bandwidth if they desire...
Exactly. 16 S/s and a couple Hz loop bandwidth would be slow for small and tightly coupled thermal loads. And 0.1µK would be too good. ;)
I've logged the ADC reading for a longer duration. The measurement was done in an office environment with a lot of electronics running nearby and no temperature stabilization.
The board was powered using PoE from the ProSAFE JGS516PE and the programmer was connected to a computer. The thermistor was replaced with a 10k reference resistor.
The room was vacated from about 6000 s to 11500 s. During this period the ambient temperature was significantly increased due to poor ventilation. I'd estimate the temperature increased by over 5 K.
All in all this looks promising.
What's np.diff(data).std()/2**.5 for that (cheap trick for suppressing the low-f instability of normally distributed noise)?
What's np.diff(data).std()/2**.5 for that (cheap trick for suppressing the low-f instability of normally distributed noise)?
In units of LSB: 2.51.
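For anyone puzzled by the trick, a minimal illustration with synthetic data (the variable names are mine, not from the logging script):

```python
import numpy as np

# Differencing removes slow drift; for white noise
# Var(x[n+1] - x[n]) = 2*Var(x), hence the division by sqrt(2).
rng = np.random.default_rng(0)
n = 100_000
white = rng.normal(0, 2.5, n)        # white noise, sigma = 2.5 LSB
drift = np.linspace(0, 50, n)        # slow low-frequency instability
data = white + drift

print(data.std())                    # inflated by the drift
print(np.diff(data).std() / 2**0.5)  # ~2.5, drift suppressed
```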
Would the on-board temperature sensor make sense to compensate for ADC temperature coefficient?
Here is a 10 minute run using the power jack and with the programmer disconnected.
This gives np.diff(data).std()/2**.5 = 2.55
The same as before, but using a 3 m twisted pair to connect to the 10k reference resistor.
Powered via jack and programmer disconnected. (np.diff(data).std()/2**.5 = 2.57)
Powered via PoE and programmer connected. (np.diff(data).std()/2**.5 = 2.53)
Would the on-board temperature sensor make sense to compensate for ADC temperature coefficient?
The ADC has a temperature sensor inside it that can be read out (at the expense of a 30% reduction in data rate). It would then be trivial to compensate the temperature coefficient in software, although it would probably have to be individually measured for each board.
Based on the expected temperature coefficient for the circuit, and from the data posted above, my expectation is that the temp co will be low enough that users won't want to bother compensating it.
The caveat will be that if the TEC drivers make the board temperature vary strongly with the TEC current due to poor thermal management, then this might become an issue. But, hopefully with decent thermal management we can make this a non-issue.
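To illustrate, the compensation could be as simple as the sketch below (the tempco value and all names are hypothetical, not from the Thermostat firmware; the per-board coefficient would come from the individual calibration mentioned above):

```python
# Hypothetical linear temp-co correction using the ADC's internal sensor.
TEMPCO_LSB_PER_K = 0.8   # hypothetical per-board calibration value
T_CAL_C = 25.0           # die temperature at calibration [degC]

def compensate(adc_raw: float, die_temp_c: float) -> float:
    """Remove the ADC's (assumed linear) temperature drift from a reading."""
    return adc_raw - TEMPCO_LSB_PER_K * (die_temp_c - T_CAL_C)
```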
Exactly. 16 S/s and a couple Hz loop bandwidth would be slow for small and tightly coupled thermal loads. And 0.1µK would be too good. ;)
Indeed. Obviously, if we want an easy factor of 2 in bw, we could switch to using 2 single ADCs instead of a dual. IME it's not worth it, since the control loops one wants are generally quite slow, but I'm happy to be corrected if that's not true for other people's use cases.
The same as before, but using a 3 m twisted pair to connect to the 10k reference resistor.
NB this cable was completely unscreened.
That's great! This is one of the things that really worried me. Using COTS temperature controllers (including some quite expensive ones sold by laser companies whose electronics design skills I generally hold in high regard), I've generally found really poor results with long unscreened cables: sometimes instability of 100 mK or more over a few minutes, presumably due to rectified RF/50Hz/whatever. It's great to see this working so well in a noisy environment with shoddy cabling!
NB we were sloppy with our maths above and got the temperature conversion wrong. It's approx 5 uK per LSB, not 5 nK/LSB ... oops...
typical 10k thermistor:
- R_0 = 10k [Ohm]
- alpha = 4.4e-2 [K^-1]
- dR_th/dT = R_0*alpha [Ohm/K]

AFE:
- R = 5k
- V_th = V_r*R_th/(R_th+2*R)
- dV_th/dR_th = V_r*2*R/(R_th+2*R)^2
- R_th ~ R_0 = 2*R => dV_th/dR_th = V_r/(4*R_0)
- dV_th/dT = dV_th/dR_th * dR_th/dT = V_r/(4*R_0)*R_0*alpha = V_r/4 * alpha
- ADC_mu = V_th/V_r*2^24
- dADC_mu/dT = dADC_mu/dV_th * dV_th/dT = 2^22*alpha [LSB/K]
- dT/dADC_mu = 1/(2^22*alpha) = 5.5 uK/LSB
So the RMS noise we're looking at is more like 2.5 LSB = 14 uK. Still solidly respectable.
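The same conversion as a quick numerical check (rounds slightly differently from the 14 uK quoted above):

```python
alpha = 4.4e-2                       # thermistor sensitivity [1/K]

# dADC_mu/dT = 2**22 * alpha [LSB/K], so:
uK_per_lsb = 1e6 / (2**22 * alpha)
print(f"{uK_per_lsb:.2f} uK/LSB")    # ~5.42 uK/LSB
print(f"{2.5 * uK_per_lsb:.1f} uK")  # RMS noise at 2.5 LSB: ~13.5 uK
```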
I've done some more measurements with different post-filter settings:
Using the fastest sampling of 27 SPS I measure np.diff(data).std()/2**.5 = 3.1 LSB.
The slowest sampling of 16.67 SPS gives np.diff(data).std()/2**.5 = 2.5 LSB.
Removing the post-filter and sampling at a nominal 31,250 SPS I found np.diff(data).std()/2**.5 = 259 LSB.
Edit: Setup: looking on both channels with a 10k precision resistor, one on a 2 m cable (twisted pair); no significant differences were observed between the channels. TEC disconnected. Powered via the barrel connector.
NB I'd take the no-post-filter data with a pinch of salt: the Ethernet data stream experienced corruption at high data rates. I believe I've rejected all corrupt data.
Thanks @pathfinder49. Remind me (and yourself when you look back at this later on) what the setup was here? I assume something like:
How did the data look? By eye, did it look like white noise (did you FFT it quickly?) or did it look like it had structure? One thing I'm interested in is whether we see sudden spikes/glitches in the data due to EMI when less aggressive filtering is used.
Generally, how does this data compare with your expectations?
NB As before, conversion is approx 5.5uK/LSB
Answering one of my own questions... for the sine1+sine5 filter with no post filter, the datasheet says:
For a 3V3 reference, 260 LSB RMS is 51 uV RMS. That's a lot higher than the 9.5 uV in the spec sheet, which seems a little suspicious. It would be interesting to see a noise spectrum, since I can imagine this is dominated by narrow-band sources (most likely 50 Hz harmonics or SMPS noise).
For the data with the post filter, 3.1 LSB is 0.6 uV RMS, which is pretty close to the datasheet value.
So, it looks like there may be some scope for upping the sample rate, although the optimal is likely to be strongly dependent on the application.
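For reference, the LSB-to-voltage conversions used above, as a quick check:

```python
V_REF = 3.3                          # ADC reference [V]
uV_per_lsb = V_REF / 2**24 * 1e6     # ~0.20 uV/LSB

print(260 * uV_per_lsb)              # no post filter: ~51 uV RMS
print(3.1 * uV_per_lsb)              # with post filter: ~0.61 uV RMS
```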
Anyway, nice work.
Oh, the other question I had was how long you recorded the data for. I.e. did you take 30 min or so of data so you'd have a decent run at capturing somewhat infrequent noise spurs (the kind that wreak havoc with the COTS temperature controllers we have in the lab)?
Remind me (and yourself when you look back at this later on) what the setup was here?
I'll edit my post.
Oh, the other question I had was how long you recorded the data for. i.e. did you take 30min or so of data so you'd have a decent run at capturing somewhat infrequent noise spurs
This data was over 2 minutes with post filter and 10 s without.
Would you like additional long term data to that produced in sept?
Would you like additional long term data to that produced in sept?
Not long term data. I'd like two things:
Apologies for the delay; here are some quick plots of the unfiltered data. The actual data rate was significantly less than the nominal 31,250 SPS implied by the filter settings, and I haven't checked how the firmware handles time-stamping, so I'd take the frequency axis with a pinch of salt. The long cable was on ch0.
Raw data on one channel
FFT
Zoomed on lower frequency range
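For reference, a minimal sketch of how a spectrum like this can be produced from the logged samples (the file name is hypothetical, and as noted above the frequency axis is only approximate because the true sample rate is uncertain):

```python
import numpy as np
import matplotlib.pyplot as plt

fs = 31_250.0                            # nominal sample rate [Hz]
data = np.loadtxt("adc_log.txt")         # hypothetical log of raw ADC codes

f = np.fft.rfftfreq(len(data), d=1/fs)
spectrum = np.abs(np.fft.rfft(data - data.mean()))

plt.loglog(f[1:], spectrum[1:])          # skip the DC bin
plt.xlabel("frequency [Hz]")
plt.ylabel("|FFT| [LSB]")
plt.show()
```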
Thanks @pathfinder49 ! If the current firmware can't keep up with the ADC then I don't think the spectra are particularly meaningful, so I wouldn't read too much into the data. As I said, my guess is that with the wider BW and no screening around the board, we're dominated by pickup at various frequencies...
At a high level: clearly there is going to be an application-specific trade off on filter settings (more bw gives potentially better rejection of external thermal noise but worse rejection of electrical noise).
I think we've got enough data on that for now to demonstrate that the hardware doesn't seem to have any major issues. I suggest we leave this there for now and move onto the TEC.
Here is a plot showing the ADC response to step changes in the TEC 0 driver, logging at 27 SPS.
The TEC current was stepped by setting the I-Set voltage as follows: 1.65 V (0.30 A) -> 2.97 V (2.94 A) -> 1.65 V -> 2.64 V (2.28 A) -> 1.65 V
TEC1 was left at 1.65 V.
The ADC setup is the same as in previous posts and the board is powered via the 12 V jack.
Nice job @pathfinder49. Let's see:

- 1 LSB = 5.5 uK
- dI_tec/dV_tec = 6 A / 3 V = 2 A/V
- First step: dV_tec = 1.32 V, so dI_tec = 2.64 A. 55 LSB = 0.3 mK change in the set-point, i.e. 110 uK/A cross-talk between the TEC current and the set-point.
- Second step: dV_tec = 0.99 V, so dI_tec = 1.98 A. 25 LSB = 0.14 mK change in the set-point, i.e. 70 uK/A cross-talk between the TEC current and the set-point.

What's interesting is that the difference in the size of the two current steps is pretty small -- and the system doesn't exhibit any obvious hysteresis -- but there is a big difference in the size of the ADC offset change. So, there is something quite non-linear going on here. Maybe an issue with the supply voltage?
Either way, it suggests that at lower TEC powers the short-term stability effects (i.e. not worrying about the board heating up yet) aren't huge.
Also, good to see that by eye there is no change in the short-term noise levels with the TEC on/off!
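For reference, the cross-talk arithmetic above as a quick sketch (values from this thread; the rounding differs slightly from the figures quoted):

```python
uK_per_lsb = 5.5                  # ADC conversion derived earlier [uK/LSB]
a_per_v = 2.0                     # TEC transconductance: 6 A over 3 V

steps = [
    # (I-Set voltage step [V], observed ADC shift [LSB])
    (2.97 - 1.65, 55),
    (2.64 - 1.65, 25),
]
for dv, dlsb in steps:
    di = a_per_v * dv             # TEC current step [A]
    dt = dlsb * uK_per_lsb        # apparent set-point shift [uK]
    print(f"dI = {di:.2f} A -> {dt:.0f} uK ({dt/di:.0f} uK/A)")
```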
For completeness, I guess your set-up is:
Is that all correct? And, do you agree with my analysis above?
Same data with the second channel shown
It can be caused by a ground loop. In the next revision I used a 4-layer board where the grounding is much better.
@hartytp and I just measured the voltage rails when stepping the TEC and could see no significant changes (1 mV resolution DMM) on 5V5, 3V3, 3V3A, or the resistor voltage drop.
I've now replaced the long-leaded resistor on ch0 with one directly connected to the terminal block. The setups for ADC 0 & 1 are now identical.
The plot shows the following TEC steps [note: current in A = 2*(voltage - 1.5 V)]:
@pathfinder49 So, I think the remaining tests, in rough order of priority, are:
If we can get all of that done, I'd say we've tested this design very well!
I've added the capacitors you suggested (100 nF) on both ADC channels. Performing the same measurement, the behavior is broadly speaking the same.
I've connected the ADC REF- to the GND side of R50.
This removes the stepping to better than one LSB.
On a separate note, ADC pin 1 appears not to be connected to ground on my board. This disagrees with the schematics.
Awesome, so no short-term glitches evident at all when we step the TEC current -- just slow thermal effects. Very good!
A quick attempt at getting noise characteristics at different TEC drive powers. The settling time is quite long; I'll follow up with a longer settling time.
Setup:
TEC settings (ch 0 & 1 are both kept at the same current):
- initially: 0.6 A
- 600 - 1200 s: -0.4 A (each)
- 1200 - 1800 s: -1.4 A (each)
- then: 0.6 A (each)
Nice. So, am I right in saying that:
If so, I think we're done with noise measurements. Just need to look at the temp co and board temperature and we're finished.
Tests I propose to do:
I think that will be enough to verify the basic functionality before finalizing the v2.0 design, but let me know if anyone thinks there is anything else we should do.