neuronsimulator / nrn

NEURON Simulator
http://nrn.readthedocs.io

Fast_imem struct freeing #354

Open iomaganaris opened 4 years ago

iomaganaris commented 4 years ago

During the cmake branch development and testing we noticed that fast_imem_free() is not called at the end of the simulation to free the memory related to the fast_imem calculation (the static _nrn_Fast_Imem* fast_imem_ struct). During development we tried calling fast_imem_free() from the nrn_threads_free() function, instead of the nt->_nrn_fast_imem = NULL; in line 680 of src/nrnoc/multicore.c (https://github.com/neuronsimulator/nrn/commit/b6e3a2e3433de1ef55b79379536a1fc9c23ce3ac#diff-8ee41b6f2fcd8d468c5ac1a656b7f346R680). This led to fast_imem_free() being called every time the number of threads changed, which caused issues with the reporting of the simulations from Neurodamus. More specifically, when fast_imem_free() is called from nrn_threads_free(), the membrane-current reports created by reportinglib are all filled with zeros. From inspecting the execution workflow of the Neurodamus simulation, it seems that fast_imem_alloc() is still called before the simulation, so my guess is that something is misconfigured due to the multiple times fast_imem_free() is called. Unfortunately, I wasn't able to reproduce the issue with any script other than the Neurodamus simulation we are using for testing with the fast_imem calculation enabled. I am still investigating and I will update this issue when I make any progress.

nrnhines commented 3 years ago

@iomaganaris Is this still an issue?

iomaganaris commented 3 years ago

Hello @nrnhines ,

I think that the issue is still there. I created a simple test to try to reproduce this issue:

```python
from neuron import h

def test_simple_sim():
    h('''create soma''')
    h.load_file("stdrun.hoc")
    h.soma.L = 5.6419
    h.soma.diam = 5.6419
    h.soma.insert("hh")
    h.cvode.use_fast_imem(1)
    h.cvode.cache_efficient(1)
    ic = h.IClamp(h.soma(0.5))
    ic.delay = 0.5
    ic.dur = 0.1
    ic.amp = 0.3

    v = h.Vector()
    v.record(h.soma(0.5)._ref_v, sec=h.soma)
    i_mem = h.Vector()
    i_mem.record(h.soma(0.5)._ref_i_membrane_, sec=h.soma)
    tv = h.Vector()
    tv.record(h._ref_t, sec=h.soma)
    h.run()
    assert v[0] == -65.0

test_simple_sim()
```

I also added some printf calls inside fast_imem_free() to check whether it frees the fast_imem_ struct and its elements, but I never saw it being called.

nrnhines commented 3 years ago

I don't think fast_imem_free should be called in the above situation, because the soma (and all inserted mechanisms) will not be destroyed on exit from test_simple_sim(). However, if you changed h('''create soma''') to soma = h.Section(name='soma') and all the subsequent h.soma to soma, then I would expect fast_imem_free to be called.
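The distinction here hinges on object lifetimes: a section created with h.Section is owned by a Python variable and destroyed when that variable goes out of scope, while one created with h('create soma') lives in the HOC interpreter and persists after the function returns. A minimal pure-Python sketch of that lifetime rule, using a hypothetical stand-in class rather than NEURON itself:

```python
import weakref

class Section:
    """Hypothetical stand-in for h.Section; not NEURON's actual class."""
    pass

def build_local_model():
    soma = Section()          # owned by a local variable, like soma = h.Section(name='soma')
    return weakref.ref(soma)  # a weak reference does not keep the object alive

ref = build_local_model()
# The local 'soma' went out of scope on return, so the object was destroyed --
# the situation in which one would expect teardown code like fast_imem_free to run.
print(ref() is None)  # True
```

With h('''create soma''') there is no Python owner at all, so nothing is destroyed when the function exits, which matches the behaviour described above.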

iomaganaris commented 3 years ago

Hello Michael,

Thank you for your answer. I tried testing this with your suggestion and fast_imem_free() is still not called. The same goes for nrn_threads_free(), which I was expecting to be called at the end of the script. So maybe this is normal behaviour then?

nrnhines commented 3 years ago

I think it is a problem. Let me look into it.

nrnhines commented 3 years ago

I believe what is happening is that the last allocated static _nrn_Fast_Imem* fast_imem_ persists until cvode.use_fast_imem(0) turns it off, or until something happens that causes it to be freed and allocated anew (a change in the number of threads, or a structural change that alters its required size). So for your test, try calling h.cvode.use_fast_imem(0) at the end of test_simple_sim(). Let me know if you consider this a resolution of this issue.
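The lifecycle described above can be summarised as: the buffer is (re)allocated when fast imem is enabled or the thread count changes, and is only freed explicitly by use_fast_imem(0). A rough pure-Python model of that lifecycle (class and attribute names are illustrative, not NEURON's actual API):

```python
class FastImemModel:
    """Illustrative model of the static fast_imem_ lifecycle; not real NEURON code."""
    def __init__(self):
        self.buf = None      # stands in for the static _nrn_Fast_Imem* pointer
        self.nthread = 0

    def use_fast_imem(self, enable, nthread=1):
        if not enable:
            self._free()                  # explicit cvode.use_fast_imem(0)
        elif self.buf is None or nthread != self.nthread:
            self._free()                  # freed and allocated anew on a thread-count change
            self.buf = [0.0] * nthread
            self.nthread = nthread

    def _free(self):
        self.buf = None
        self.nthread = 0

m = FastImemModel()
m.use_fast_imem(1)          # first simulation: allocate
buf_first = m.buf
m.use_fast_imem(1)          # second simulation, same thread count: nothing freed
print(m.buf is buf_first)   # True: the allocation persists between runs
m.use_fast_imem(0)          # explicit disable finally frees it
print(m.buf is None)        # True
```

This is why no free was observed in the test above: nothing between the two runs disabled fast imem or changed the required allocation size.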

iomaganaris commented 3 years ago

I see. I think that this is reasonable then. Thanks a lot for looking into this.

pramodk commented 3 years ago

When neuron is about to terminate/finalise, do we explicitly call use_fast_imem(0)?

I think it would be nice to free everything explicitly, so that when we run valgrind we don't see any leaks.

nrnhines commented 3 years ago

Rightly or wrongly, I have only concentrated on valgrind --leak-check=full --show-leak-kinds=definite ... A bare launch of nrniv followed by ^D gives:

==13775== LEAK SUMMARY:
==13775==    definitely lost: 55 bytes in 1 blocks
==13775==    indirectly lost: 0 bytes in 0 blocks
==13775==      possibly lost: 383,432 bytes in 2,548 blocks
==13775==    still reachable: 2,252,239 bytes in 11,036 blocks
==13775==                       of which reachable via heuristic:
==13775==                         newarray           : 96,328 bytes in 4 blocks

Whereas deleting the model but leaving a one-compartment fast imem allocation in place yields:

==13869== LEAK SUMMARY:
==13869==    definitely lost: 55 bytes in 1 blocks
==13869==    indirectly lost: 0 bytes in 0 blocks
==13869==      possibly lost: 383,432 bytes in 2,548 blocks
==13869==    still reachable: 2,782,604 bytes in 16,607 blocks
==13869==                       of which reachable via heuristic:
==13869==                         newarray           : 96,328 bytes in 4 blocks

So the fast imem arrays are reachable on exit. I've never had very good luck reducing the "possibly lost" number.

nrnhines commented 3 years ago

@pramodk What is your opinion. Close? Remove as 8.0a issue?

pramodk commented 3 years ago

We can remove this from 8.0a. We can take a detailed look at the valgrind report at a later stage.

alexsavulescu commented 1 year ago

We should re-check this.