NeuralEnsemble / python-neo

Neo is a package for representing electrophysiology data in Python, together with support for reading a wide range of neurophysiology file formats
http://neo.readthedocs.io/en/latest/
BSD 3-Clause "New" or "Revised" License

Problem with magnitude of currents #533

Closed Emerson207 closed 5 years ago

Emerson207 commented 6 years ago

I am using AxonIO to load .abf data into Python for further processing. When I load my data, the magnitude of the currents is altered compared with the original recording. First, the sign of the currents is flipped (they appear as positive values in Clampfit but are loaded as negative currents). In addition, the magnitude of the currents changes (I am not sure of the exact factor, but it is around 3.05). I have been studying the problem, and I know it does not occur with all files: if I load the original file, everything works perfectly, but when I load the same file after filtering and saving it with Clampfit, the magnitude and sign of the current change. This also happens with other modules such as pyabf, so I think it is probably something related to the .abf file itself. I am sending a sample of the original recording (2018_04_13_0016.abf) and the same file saved after modification in Clampfit (2018_04_13_0016_10f.abf; I used an 8-pole Bessel lowpass filter with a 10 kHz cutoff). I would really appreciate any help.

Emerson. Recordings.zip
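
A minimal sketch of the comparison being described, assuming the two files from Recordings.zip are in the working directory and neo is installed:

```python
from neo.io import AxonIO

# Load each recording and print its range; with the affected neo versions the
# filtered file comes back with a flipped sign and a magnitude off by roughly
# a factor of 3, as described above.
for fname in ["2018_04_13_0016.abf", "2018_04_13_0016_10f.abf"]:
    block = AxonIO(filename=fname).read_block()
    sig = block.segments[0].analogsignals[0]
    print(fname, sig.units, float(sig.min()), float(sig.max()))
```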

samuelgarcia commented 6 years ago

Thanks for the report.

I have also opened an issue here: https://github.com/swharden/pyABF/issues/5

@swharden (the pyabf dev) appears to be a real expert on the ABF format. I am not at all; I am not even a user of abf files! So we can join efforts to solve this.

Recently @zixuanweeei also reported a gain/offset problem: https://github.com/NeuralEnsemble/python-neo/issues/491

If I understand correctly, when a user makes an in-place modification of an abf file with Clampfit, the file ends up with the wrong gain/offset, so the magnitude is totally wrong. This suggests that these gain/offset values are internally coded in another way.

samuelgarcia commented 6 years ago

@swharden: here is a probable explanation: in the zip, the file "2018_04_13_0016.abf" is the original one and is coded in int16, while the filtered one "2018_04_13_0016_10f.abf" is coded as float32. For the second one, the gain and offset are still the same. So there are two possibilities:

Do you have an idea?
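
A small illustrative sketch of that distinction (the gain/offset numbers below are made up, not taken from these files): applying an int16-style gain/offset to data that is already stored as physical float32 values distorts the magnitude.

```python
import numpy as np

gain, offset = -3.05e-3, 0.0                     # hypothetical header scaling
raw_int16 = np.array([1000, -2000], dtype=np.int16)

# Correct for an int16 ABF: raw counts must be scaled to physical units.
physical = raw_int16.astype(np.float32) * gain + offset

# A float32 ABF already stores the physical values; re-applying the same
# gain/offset distorts (and here also flips) the signal.
wrong = physical * gain + offset
print(physical, wrong)
```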

samuelgarcia commented 6 years ago

@Emerson207: could you confirm that with neo 0.5.2 (before the new neo.rawio refactoring) the magnitude used to be correct even for abf files modified by Clampfit? My guess is that with float32 the internal gains/offsets must not be applied, as was the case in older versions of neo. @swharden: could you confirm this?

Emerson207 commented 6 years ago

I used to have an older version of neo, but I am not sure which one. Then I had to format my computer and installed the latest version. I have also installed older versions, but I always get the same problem. I am really sorry for the little information I have about it...


Emerson207 commented 6 years ago

I am going to install the 0.5.2 version, but I am pretty sure I did it before...


lbologna commented 6 years ago

Hello,

I am having the same problem after upgrading from 0.5.2 to 0.6.1 and it seems like the patch referred to in #491 is not available anymore.

I am running the following code:

```python
from neo.io import AxonIO

filename = './96711008.abf'
r = AxonIO(filename=filename)
bl = r.read_block()  # read the entire file -> a Block

for seg in bl.segments:
    print(seg.analogsignals[0])
```

on the file linked here: https://www.dropbox.com/s/yt4lyx49lx99r83/96711008.abf?dl=0

and the signal values, when printed with neo 0.6.1, are about 32 times smaller than they were with 0.5.2.

Is the fix for this already in place or should I perform some scale conversion?

Thank you.
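
As a side note, one way to sanity-check the magnitude (a sketch only, assuming pyabf is also installed, and keeping in mind the original report says pyabf showed the same problem at the time) is to read the same file with both libraries and compare the ranges:

```python
import pyabf
from neo.io import AxonIO

fname = "96711008.abf"

# Range as reported by neo
sig = AxonIO(filename=fname).read_block().segments[0].analogsignals[0]

# Range as reported by pyabf for the first sweep
abf = pyabf.ABF(fname)
abf.setSweep(0)

print("neo  :", float(sig.min()), float(sig.max()), sig.units)
print("pyabf:", abf.sweepY.min(), abf.sweepY.max(), abf.sweepUnitsY)
```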

samuelgarcia commented 6 years ago

I know where it comes from. When the internal dtype is float32, the gain/offset should not be applied (as was the case in 0.5.X). I will try to make the patch very soon. Sorry for the delay on this bug.
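
A minimal sketch of that idea, not the actual neo patch (the function and variable names here are illustrative only):

```python
import numpy as np

def to_physical(raw, gain, offset):
    """Apply the header gain/offset only to integer raw data; float32 ABF
    data is assumed to already be in physical units."""
    if np.issubdtype(raw.dtype, np.floating):
        return np.asarray(raw, dtype=np.float64)        # already scaled, leave as-is
    return raw.astype(np.float64) * gain + offset       # scale int16/int32 counts

# The same rule applied to the two storage formats (made-up numbers):
print(to_physical(np.array([1000, -2000], dtype=np.int16), 3.05e-3, 0.0))
print(to_physical(np.array([3.05, -6.10], dtype=np.float32), 3.05e-3, 0.0))
```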

swharden commented 6 years ago

Assessment

I found that if nDataFormat = 0, it means the data is int; if it's 1, it means the data is float. When the section map is read, the size of the data points is listed (2 bytes = 16-bit data, 4 bytes = 32-bit data). In the examples in the original post, the data are 16-bit int and 32-bit float, respectively. I have ensured pyABF (v2) supports 32-bit floating point ABFs and added the demo files listed above to the pyABF demo data section, but unfortunately the magnitude (scaling) problem remains. Slightly more information is written on the similar ticket on the pyABF project: https://github.com/swharden/pyABF/issues/5

It's interesting to note that this scaling issue is identical on both channels, suggesting that the cause is not in the header sections that contain values broken down by DAC or ADC. Basically, I couldn't see anything obviously different in the headers that would account for this change.

Pretty good fix

I have no idea why this works, but just multiplying each sweep by -1/3 gives you the data you want.

code:

```python
import matplotlib.pyplot as plt
import pyabf
assert pyabf.__version__.startswith("2")

# plot the first file
abf = pyabf.ABF("2018_04_13_0016a_original.abf")
abf.setSweep(sweepNumber=0, channel=1)
plt.plot(abf.sweepX, abf.sweepY, label="original ABF")

# plot the second file on top of it
abf = pyabf.ABF("2018_04_13_0016b_modified.abf")
abf.setSweep(sweepNumber=0, channel=1)
mult = -(1/3)
plt.plot(abf.sweepX, abf.sweepY * mult, label="modified ABF")

# decorate the plot and zoom to an interesting area
plt.title("2018_04_13_0016 inspection (pyABF v2)")
plt.ylabel(abf.sweepLabelY)
plt.xlabel(abf.sweepLabelX)
plt.legend()
plt.axis([.005, .015, -150, 350])
plt.show()
```

output:

(figure: the original sweep overlaid with the modified sweep multiplied by -1/3)

samuelgarcia commented 6 years ago

Hi @swharden, thank you for this long answer. I am a bit lost with this ABF format. Supporting only int16 directly from the DAQ would have been wise... I am a bit lost.

@lbologna: could you also test the patch and compare with the original data magnitude?

In conclusion, internal float32 in ABF is more or less not reliable; I was not aware of that.

swharden commented 6 years ago

@samuelgarcia

How do I know data is int when nDataFormat is 0 and float when nDataFormat is 1?

I sort of made that one up by inspecting the headers of a lot of files (those in the pyABF demo data folder). Every ABF file I have has nDataFormat=0 (including the original ABF posted by the OP), except for the Clampfit-modified file posted by the OP, so it seems like a safe enough guess for now.

We determine 16-bit vs. 32-bit data point size (whether int or float) by the second element of DataSection in the section map (corresponding to the number of bytes each data point is). I haven't personally seen float16 ABF data, so I'm not sure if that's possible or not, but it's easy enough to support just in case.
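
A sketch of that rule as a simple lookup table (the dictionary name is illustrative, the int32 entry is only implied, and the float16 entry is purely speculative as noted above):

```python
import numpy as np

# nDataFormat (0 = int, 1 = float) combined with the per-point byte size
# taken from DataSection gives the dtype of the raw data block.
DTYPE_BY_FORMAT_AND_SIZE = {
    (0, 2): np.int16,    # the common case, e.g. the original file above
    (0, 4): np.int32,    # implied by the rule, not seen in the files discussed
    (1, 4): np.float32,  # e.g. the Clampfit-filtered file above
    (1, 2): np.float16,  # speculative; no such file has been observed
}

print(DTYPE_BY_FORMAT_AND_SIZE[(1, 4)])
```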

The headers of many files can be viewed by clicking their filename in the table:

The headers of the two files in question are here:

Regarding scaling, offset, and the magic "-1/3": I think it makes sense now.

I wasn't sure where the -1/3 was coming from, but your post helped me understand it. When using float32 math, we simply shouldn't scale the data (I didn't realize that before). By not scaling the data (ignoring the values of fInstrumentScaleFactor, fSignalGain, fADCProgrammableGain, fTelegraphAdditGain, fADCRange, lADCResolution, fInstrumentOffset, and fSignalOffset), the output looks perfect. Previously, these values were causing me to multiply the data by -3.05. It seems your suggestion is right on - when using float data, don't scale it.
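
A quick back-of-the-envelope check of the "magic -1/3", using the numbers from this discussion:

```python
# The int16 gain chain effectively multiplies the data by about -3.05, so
# undoing it on data that is already in physical units is a factor of
# 1 / -3.05 ~= -0.328, i.e. roughly -1/3.
wrong_factor = -3.05
print(1 / wrong_factor)
```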

samuelgarcia commented 6 years ago

OK, thank you for this. So my simple patch makes sense.

apdavison commented 5 years ago

Seems to be fixed in #543. Please re-open if you still have problems.

samuelgarcia commented 5 years ago

Closed by #543.