mabuchilab / Instrumental

Python-based instrumentation library from the Mabuchi Lab.
http://instrumental-lib.readthedocs.org/
GNU General Public License v3.0

Seems to have memory leak for continuous read/write #28

Closed ericreichwein closed 7 years ago

ericreichwein commented 7 years ago
while True:
    daq = NIDAQ("Dev1")
    task = Task(daq.ao0, daq.ai1)
    task.set_timing(duration='10s', fsamp='100Hz')
    read_data = task.run(write_data)  # write_data defined elsewhere
    task.unreserve()

Here is a working example of my code. For the first 20 or so loops, memory usage doesn't increase; then it increases quite fast and kills the program.

natezb commented 7 years ago

Coincidentally, I happen to be working on the daq.ni driver at the moment, and just added some methods that should be relevant. In particular, I added a method for clearing Tasks, which calls DAQmxClearTask on the underlying tasks, which should clean up the memory.

You can call clear() directly (instead of unreserve() in your example), or via the context manager I just added:

daq = NIDAQ('Dev1')
while True:
    with Task(daq.ao0, daq.ai1) as task:
        task.set_timing(duration='10s', fsamp='100Hz')
        read_data = task.run(write_data)

I also added a call to clear() in Task's __del__() method, so this cleanup should happen automatically, though it may be better to explicitly clear the task or use the context manager instead.
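The cleanup pattern described above, a clear() that releases the native resource and is invoked from both the context manager's exit and __del__, can be sketched in plain Python. This is a mock for illustration only (MockTask is a hypothetical name, not the actual Instrumental implementation):

```python
# Mock sketching the cleanup pattern: clear() releases the native handle,
# and both __exit__ and __del__ call it, so cleanup happens even if the
# user forgets to call it explicitly.

class MockTask:
    def __init__(self):
        self._handle = object()  # stands in for the native DAQmx task handle

    def clear(self):
        """Release the native resource; safe to call more than once."""
        self._handle = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.clear()  # runs even if the with-block body raised

    def __del__(self):
        self.clear()  # last-resort cleanup when the object is collected


with MockTask() as task:
    assert task._handle is not None
assert task._handle is None  # cleared on exiting the with-block
```

The point is that an explicit with-block (or an explicit clear()) is deterministic, while relying on __del__ alone leaves the timing up to the garbage collector.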

Give it a try and let me know if it helps.

ericreichwein commented 7 years ago

Thanks for the quick response. I think there's a bug on line 640: def _write_AO_channels(self, data): should be def _write_AO_channels(self, data, autostart):

If I use task.clear() or del task after task.run(write_data), then the next time through the loop I get: DAQError: (-200088) b'Task specified is invalid or does not exist.', regardless of whether I recreate the task or not. In fact, if I print(task) before and after clear(), it still persists as a Task object. I worked around this by adding del task after clear(), and that seemed to work about as well as just using unreserve(); however, I need more time to test and quantify that claim.

natezb commented 7 years ago

Nice catch on the missing keyword argument, I should've tested that code before pushing.

Now, I'm not sure I understand the second half of your comment; do you have a working example you could paste that causes the problem? From the sound of it, I think you're probably using Tasks incorrectly. Once you've cleared a task, it cannot be reused; if you try to reuse it, you'll get the DAQError: (-200088) that you noted. However, you can create a new Task and use that (I just tested this and it works for me).

Depending on what your goal is, you may be able to reuse a single Task. In that case, you'd never clear() it; you can simply keep calling run() (and set_timing(), if needed).
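The two valid lifecycles can be contrasted with a small mock (hypothetical names; the real driver raises DAQError (-200088) where this mock raises RuntimeError):

```python
# Mock illustrating the two Task lifecycles discussed above:
# (1) a fresh Task per iteration, cleared after each run, or
# (2) one Task reused across runs, cleared only once at the end.

class MockTask:
    def __init__(self):
        self._valid = True

    def run(self):
        if not self._valid:
            # Stands in for DAQError: (-200088)
            raise RuntimeError("Task specified is invalid or does not exist.")
        return "data"

    def clear(self):
        self._valid = False


# Lifecycle 1: create, run, clear; next iteration gets a new Task.
for _ in range(3):
    task = MockTask()
    task.run()
    task.clear()

# Reusing a cleared task fails, matching the -200088 error in the thread.
try:
    task.run()
except RuntimeError:
    pass

# Lifecycle 2: one Task reused across runs, cleared only at the end.
task = MockTask()
results = [task.run() for _ in range(3)]
task.clear()
```

Either lifecycle avoids leaking the native task; what fails is mixing them, i.e. clearing a task and then calling run() on it again.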

It is expected that the Task object lives on after being cleared; the underlying DAQmx memory gets cleaned up, but this doesn't affect the Python object, which will get cleaned up later by the reference counter. Calling del on an object in Python is fairly rare, and it's unneeded here.
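That a Python object outlives the native resource it wraps can be shown without any DAQ hardware: del only removes a name, and the object itself is reclaimed by the reference counter once nothing refers to it. A generic sketch (Wrapper is a made-up class, not Instrumental code):

```python
class Wrapper:
    """Generic stand-in for an object wrapping a native resource."""
    def __init__(self):
        self.resource = "native handle"

    def clear(self):
        self.resource = None  # release the native side only


w = Wrapper()
w.clear()
# The Python object still exists after clearing; only its resource is gone.
assert isinstance(w, Wrapper)
assert w.resource is None

# `del` just removes the name `w`; whether the object is freed depends on
# whether other references remain.
alias = w
del w
assert alias.resource is None  # object still alive via `alias`
```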

ericreichwein commented 7 years ago

Yep, you are right; I was using tasks incorrectly. I was chasing down a memory leak that was actually caused by updating graphs in a Qt GUI from multiple threads. Now I call clear() after the read/write routine, and I no longer have memory leak issues each time the routine runs.

while condition:
    daq = NIDAQ("Dev1")
    task = Task(daq.ao0, daq.ai1)
    task.set_timing(duration='10s', fsamp='100Hz')
    read_data = task.run(write_data)
    task.unreserve()  # for threading issues
task.clear()

natezb commented 7 years ago

Ok, I'll mark this as solved then.