SpikingNetwork / TrainSpikingNet.jl

train a spiking recurrent neural network
BSD 3-Clause "New" or "Revised" License

New PR for Potjans connectome. #7

Open russelljjarvis opened 1 year ago

russelljjarvis commented 1 year ago

A break-fixing change: the file src/contrib/genPlasticWeights-erdos-renyi-potjans.jl doesn't look substantially different from the default genPlasticWeights-erdos-renyi.jl, but it has a necessary change in its indexing:

    exc_selected = sort(exc_ordered[1:end])
    inh_selected = sort(inh_ordered[1:end])

The file genPotjansConnectivity.jl now does away with SparseArrays, possibly making it more compatible with GPU code.

src/contrib/genPotjansConnectivity.jl
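For context, a minimal sketch of the motivation (the sprand call and sizes are illustrative, not the PR's actual connectivity): a dense Matrix can be handed straight to generic GPU array code, e.g. CUDA.jl's CuArray, whereas a SparseMatrixCSC needs the separate CUSPARSE wrappers.

    using SparseArrays

    # illustrative sparse connectivity, then densified;
    # with CUDA.jl available one could then do: gpu_w = CuArray(dense_w)
    sparse_w = sprand(Float64, 100, 100, 0.1)   # 10% connection probability
    dense_w  = Matrix(sparse_w)                  # densify for GPU friendliness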

I added type annotations to most or all previously unresolved types (Vectors and Dicts), and I got a speed-up.
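For example (hypothetical names, just to show the kind of annotation): an untyped container like Dict() is Dict{Any,Any} and forces dynamic dispatch on every access, while a concretely typed one lets the compiler specialize the hot loops.

    # untyped: element type is Any, so every lookup boxes and dispatches
    conns_untyped = Dict()

    # typed: the compiler knows the values are Vector{Int32} and can specialize
    conns = Dict{String, Vector{Int32}}()
    conns["23E"] = Int32[1, 2, 3]

    # the same applies to vectors built in the connectivity code
    weights = Float64[]        # i.e. Vector{Float64}()
    push!(weights, 0.5)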

I put the invocation of this new plugin connectome in the test directory.

I put the plugin files in a contrib directory.

Besides changing the plugin paths in test/Potjansdata/init.jl, init.jl needs to calculate the cell counts, so it can pre-allocate sizes that must be known in other init.jl contexts for the basic default code to work:

scale = 1.0/30.0

function get_Ncell(scale::Float64=1.0)
    # full-scale cell counts per population of the Potjans-Diesmann microcircuit
    ccu = Dict{String, Int32}("23E"=>20683,
            "4E"=>21915,
            "5E"=>4850,
            "6E"=>14395,
            "6I"=>2948,
            "23I"=>5834,
            "5I"=>1065,
            "4I"=>5479)
    # scale each population down, rounding up so no population is emptied
    ccu = Dict{String, Int32}(k=>ceil(Int32, v*scale) for (k,v) in pairs(ccu))
    Ncells = Int32(sum(values(ccu)) + 1)
    Ne = Int32(ccu["23E"] + ccu["4E"] + ccu["5E"] + ccu["6E"])
    Ni = Int32(Ncells - Ne)
    Ncells, Ne, Ni, ccu
end

Ncells, Ne, Ni, ccu = get_Ncell(scale)

In src/cpu/loop.jl, line 240 of the loop has to be this,

if post_ci!=0

so that the for loop tolerates connection matrices that are ragged (i.e., ragged arrays packed into a dense rectangle will look sparse: they will have empty, zero-valued entries).

Introducing that line is the easiest way I know to resolve the conflict; a sketch of the situation follows.
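A minimal sketch (the array and names here are illustrative, not the repo's actual variables): when ragged per-presynaptic-cell target lists are packed into a dense rectangle, the short rows are padded with zeros, and the guard skips the padding.

    # 2 presynaptic cells; row 2 has only one real target, 0 is padding
    postIdx = Int32[2 3;
                    4 0]
    for pre_ci in 1:size(postIdx, 1)
        for j in 1:size(postIdx, 2)
            post_ci = postIdx[pre_ci, j]
            if post_ci != 0    # tolerate the zero padding of ragged rows
                println("spike propagates: $pre_ci -> $post_ci")
            end
        end
    end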

I also use a package called Distributions, to draw pseudo-random numbers from specified distributions.
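Something like the following (the Normal parameters here are illustrative; the PR may use different distributions):

    using Distributions, Random

    rng = MersenneTwister(1234)
    d = Normal(0.0, 1.0)        # mean 0, standard deviation 1
    w = rand(rng, d, 10)        # ten pseudo-random draws from d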

I haven't reverted an unintentional change to the file src/init.jl; it still has a superficial whitespace formatting difference, but not a substantive difference.

I thought they were changes that you had made that got mixed in with my PR. If that is true, I could try to delete them.

Also I think this GitHub GUI has a squash merge option too.


If I run Julia from the test directory with include("potconntest.jl"), after some time the algorithm converges on a correlation value of 0.96, and it does testing too:

    Loop no. 100, task no. 1
    correlation: 0.9610312565565572
    elapsed time: 2.321516990661621 sec
    firing rate: 33.96662786185487 Hz
    trial #97, task #1: 1.46 sec
    trial #98, task #1: 1.44 sec
    trial #99, task #1: 1.38 sec
    trial #100, task #1: 1.43 sec
bjarthur commented 1 year ago

the ragged adjacency matrices are now vectors of vectors, to save memory for networks for which the number of presynaptic synapses is not uniform.
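for illustration, a minimal sketch of the memory difference (hypothetical sizes, not the repo's actual layout):

    # dense rectangle: every row padded to the longest row's length
    dense = zeros(Int32, 3, 4)
    dense[1, 1:4] = Int32[2, 3, 4, 5]
    dense[2, 1:1] = Int32[7]           # three padded zeros wasted

    # vector of vectors: each row stores only its real synapses
    ragged = Vector{Vector{Int32}}(undef, 3)
    ragged[1] = Int32[2, 3, 4, 5]
    ragged[2] = Int32[7]
    ragged[3] = Int32[]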

also, the params.jl file now needs a new variable, init_code.
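a guess at the shape of that entry (this thread doesn't show it, so treat the body as an assumption):

    # assumed form: a quoted expression run at initialization time
    init_code = quote
        # custom initialization, e.g. computing Ncells for the connectome
    end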

lastly, you shouldn't have to change the loop code. why do you make sure that post_ci!=0? that should never be the case.

russelljjarvis commented 1 year ago

"lastly, you shouldn't have to change the loop code. why do you make sure that post_ci!=0? that should never be the case." --at the time it was because presynaptic synapses where not uniform, and it was the only way to get the TrainSpikingNet.jl to evaluate post loading a non uniform presynaptic network, while I was waiting for the network patches you discuss here.

russelljjarvis commented 1 year ago

I made some changes to put the weight matrix into a ragged-array format.

Still working on this; it is a very slow-burning PR.