Closed Cody-G closed 7 years ago
For reference, what's the recommended approach? (Docs link would be sufficient.)
This is probably the best link:
http://digital.ni.com/public.nsf/allkb/0FAD8D1DC10142FB482570DE00334AFB
It is device-dependent, so I'm betting this wasn't an issue when Imagine was first written. We could also do the rescaling manually by extracting the scaling coefficients and applying them to the raw data, but I figured we could afford to just read the scaled 64-bit float directly.
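For reference, the manual-rescaling alternative would look roughly like the sketch below. NI devices expose per-channel scaling coefficients (e.g. via `DAQmxGetAIDevScalingCoeff` in the C API), which are applied as a polynomial to convert raw ADC codes to calibrated volts. The coefficient values here are invented for illustration, not taken from any real device:

```python
def scale_raw_samples(raw, coeffs):
    """Apply polynomial scaling coefficients to raw ADC codes.

    volts = c0 + c1*raw + c2*raw**2 + ...
    (the device reports however many coefficients it uses)
    """
    return [sum(c * r**i for i, c in enumerate(coeffs)) for r in raw]

# Hypothetical coefficients: a small offset plus a slope near the
# nominal 10 V / 32768 codes for a +/-10 V range 16-bit input.
coeffs = [1.2e-3, 3.052e-4]            # [intercept, slope]
raw = [-32768, 0, 16384, 32767]        # raw int16 samples
volts = scale_raw_samples(raw, coeffs)
```

Taking the 64-bit float from the driver skips all of this bookkeeping, which is why the pull request goes that route.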
Oh, and this confirms that we were reading unscaled values with our C call:
http://zone.ni.com/reference/en-XX/help/370471AA-01/daqmxcfunc/daqmxreadbinaryi16/
This is a bit backwards since I've already submitted pull request #30 for the fix, but just for tracking the issue:
We were reading raw 16-bit integer values from the DAQ card, which is not advised with current NI DAQ cards because those values have not been scaled to reflect the device's calibration. For the most part this only introduced a slope error in the digital-to-voltage conversion, so we didn't notice.
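A toy illustration of why a pure slope error is easy to miss: converting raw codes with the nominal gain (10 V over 32768 codes) disagrees only slightly with a calibrated gain, and the discrepancy scales with the reading. The 0.1% gain error below is a made-up number to show the shape of the effect, not a measured value:

```python
NOMINAL_SLOPE = 10.0 / 32768              # volts per code, ideal ADC
CALIBRATED_SLOPE = NOMINAL_SLOPE * 1.001  # hypothetical 0.1% gain error

raw_code = 20000
nominal_volts = raw_code * NOMINAL_SLOPE        # what the raw read implied
calibrated_volts = raw_code * CALIBRATED_SLOPE  # what the driver would report

# A slope error grows linearly with the signal: near zero volts the two
# conversions agree almost exactly, which is why nothing looked wrong.
error = calibrated_volts - nominal_volts
```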