russelljjarvis opened 2 years ago
I'll take a look at this sometime this week. Did you use a released version of the package or the master branch? This repo is mid-refactor so I haven't done the due diligence of making sure every component is working.
Hi @darsnack, sorry I didn't see your reply straight away. I installed via the master branch. Should I try the released package instead?
I also noticed that the method signature for the LIF neuron model used by my example looks like this:
function lif!(t::CuVector{<:Real}, I::CuVector{<:Real}, V::CuVector{<:Real}; vrest::CuVector{<:Real}, R::CuVector{<:Real}, tau::CuVector{<:Real})
https://github.com/darsnack/SpikingNN.jl/blob/master/src/models/lif.jl#L100
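For reference, a call matching that signature would need every argument already on the GPU, something like the following hypothetical sketch (the parameter values and vector names are made up, not taken from the repo):

```julia
using CUDA

# Hypothetical call matching the quoted lif! signature; all positional
# and keyword arguments must already be CuVectors for this method to
# dispatch. The values here are illustrative only.
N = 100
t = CUDA.zeros(Float64, N)      # last-spike times
I = CUDA.fill(2.0, N)           # input currents
V = CUDA.fill(-65.0, N)         # membrane potentials
lif!(t, I, V;
     vrest = CUDA.fill(-65.0, N),
     R = CUDA.fill(1.75, N),
     tau = CUDA.fill(10.0, N))
```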
In my example I have not actually supplied I and V as CuVectors. Perhaps if I did that, it would help. I am not sure about all the other parameters, however. It would be nice if they were all converted automatically, and perhaps that is the intention.
https://github.com/russelljjarvis/SpikingNN.jl/blob/master/examples/population-gpu-test.jl#L12
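If automatic conversion is the intent, a sketch of what that could look like (the `to_device` helper is hypothetical, not part of SpikingNN.jl):

```julia
# Hypothetical helper that moves every array in a parameter
# NamedTuple through a converter (e.g. CUDA.cu on a GPU machine),
# leaving scalars and other fields untouched.
to_device(f, x::AbstractArray) = f(x)
to_device(f, x) = x
to_device(f, nt::NamedTuple) = map(v -> to_device(f, v), nt)

params = (vrest = zeros(100), R = fill(1.75, 100), tau = fill(10.0, 100))
# On a CUDA machine one would pass CUDA.cu instead of identity:
gpu_params = to_device(identity, params)
```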
Also, I should have clarified that CUDA.functional() == true on the machine I am using.
Since using master I noticed a branch called refactor/gpu-support and another branch called benchmarking. I also noticed the benchmarking branch has the classic Brunel model, which is cool.
It's a great repository, by the way; it potentially has a good balance of features and examples. I am also using WaspNet.jl and SpikingNeuralNetworks.jl, and I can't yet figure out which is the best spiking neural network package. I am busy optimizing SNNs with genetic algorithms in Julia, using the ISI spike distance of the raster plots, and I might end up making example optimizations that involve any or all of the three simulators.
Cool, glad to see someone trying this code out in a different use case than mine.
The branch that I'm currently using for my research is kd/refactor. Unfortunately, it isn't cleaned up, and I have un-pushed commits. I've lost track of this repo as I've been pulled away to non-SNN stuff. Your timing is pretty good though, since I'm resuming my SNN project. I plan on cleaning up this repo this week. I'll ping this issue when I have it done.
Srm0-test.jl runs on the GPU if you make the following modifications:
The start of src/gpu.jl needs to import from the CUDA module (compatible with the CUDA 11 driver), not CuArrays. The CUDA module has a constructor, CuArray, which presumably functions the same as the old CuArrays conversion CuArrays.cu(x).
First:
using CUDA
using Adapt  # provides `adapt`, used by cpu() below
CUDA.allowscalar(false)
# CPU/GPU movers: no-ops by default, converting only when given the other array type
cpu(x) = x
gpu(x) = x
cpu(x::CuArray) = adapt(Array, x)
gpu(x::Array) = CuArray(x)
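With those definitions, moving data between host and device is a round trip. A quick sketch (requires a functional CUDA setup):

```julia
v = rand(Float32, 4)
vg = gpu(v)        # CuArray on a CUDA-capable machine
v2 = cpu(vg)       # back to a plain Array
@assert v2 == v
```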
Next, in the file tests/Srm0-test.jl, wrap every SpikingNN element with the gpu function as appropriate.
using SpikingNN
using Plots
# SRM0 params
η₀ = 5.0
τᵣ = 1.0
vth = 0.5
# Input spike train params
rate = 0.01
T = 15
∂t = 0.01
n = convert(Int, ceil(T / ∂t))
srm = gpu(Neuron(QueuedSynapse(Synapse.Alpha()), SRM0(η₀, τᵣ), Threshold.Ideal(vth)))
input = gpu(ConstantRate(rate))
spikes = excite!(srm, input, n)
# callback to record voltages
voltages = gpu(Float64[])
record = function ()
push!(voltages, getvoltage(srm))
end
# simulate
@time output = simulate!(srm, n; dt = ∂t, cb = record, dense = true)
# plot raster plot
raster_plot = rasterplot(∂t .* spikes, ∂t .* output, label = ["Input", "Output"], xlabel = "Time (sec)",
title = "Raster Plot (\\alpha response)")
xlims!(0, T)
# plot dense voltage recording
plot(∂t .* collect(1:n), voltages,
title = "SRM Membrane Potential with Varying Presynaptic Responses", xlabel = "Time (sec)", ylabel = "Potential (V)", label = "\\alpha response")
# resimulate using presynaptic response
voltages = gpu(Float64[])
srm = gpu(Neuron(QueuedSynapse(Synapse.EPSP(ϵ₀ = 2, τm = 0.5, τs = 1)), SRM0(η₀, τᵣ), Threshold.Ideal(vth)))
excite!(srm, spikes)
@time simulate!(srm, n; dt = ∂t, cb = record, dense = true)
# plot voltages with response function
voltage_plot = plot!(∂t .* collect(1:n), voltages, label = "EPSP response")
xlims!(0, T)
plot(raster_plot, voltage_plot, layout = grid(2, 1))
xticks!(0:T)
This code executes using CUDA arrays. Note that no networks are simulated here; network simulation breaks due to an array broadcasting error that I don't understand.
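For what it's worth, one common source of failures like this is an operation falling back to scalar indexing on a CuArray, which the CUDA.allowscalar(false) line above turns into a hard error. This is only a guess at the cause, but a minimal reproduction (assuming a functional CUDA device) looks like:

```julia
using CUDA
CUDA.allowscalar(false)

a = CUDA.rand(4)
a[1]   # errors: scalar getindex is disallowed when allowscalar(false) is set
```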
Hi there,
To test whether I could simulate spiking neural networks on the GPU, I modified the population-test.jl file (now called population-gpu-test.jl). To make the example a bit less trivial, I grew the neuron population to 100 by creating a square neuron weight matrix, as such:
https://github.com/russelljjarvis/SpikingNN.jl/blob/master/examples/population-gpu-test.jl#L12
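The linked line builds something along these lines (the values here are illustrative only; the real file may differ):

```julia
# Square all-to-all weight matrix for a population of 100 neurons
n = 100
weights = rand(n, n)
```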
I also tried to make the neuron activity a bit more balanced by distributing the inputs so only 1/3 of inputs are strong.
https://github.com/russelljjarvis/SpikingNN.jl/blob/master/examples/population-gpu-test.jl#L35-L37
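That input split can be sketched as follows (the rate values are made up; ConstantRate is the input type used in the script above):

```julia
# Roughly 1/3 of inputs strong, the rest weak (illustrative rates)
n = 100
rates = [i % 3 == 0 ? 0.5 : 0.01 for i in 1:n]
inputs = [ConstantRate(r) for r in rates]
```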
All of these modifications work if I use:
on line 16 but they break if I use
instead.
See the stack trace below.