FluxML / GeometricFlux.jl

Geometric Deep Learning for Flux
https://fluxml.ai/GeometricFlux.jl/stable/
MIT License

examples/gcn.jl doesn't work #245

Closed jarbus closed 2 years ago

jarbus commented 2 years ago

I get the following error when running examples/gcn.jl on the CPU (and a similar error on the GPU).

ERROR: LoadError: MethodError: no method matching GCNConv(::Matrix{Float32}, ::Pair{Int64, Int64}, ::typeof(relu))
Closest candidates are:
  GCNConv(::A, ::B, ::F, ::S) where {A<:(AbstractMatrix{T} where T), B, F, S<:AbstractFeaturedGraph} at /home/jack/.julia/packages/GeometricFlux/ErNzP/src/layers/conv.jl:20
  GCNConv(::AbstractFeaturedGraph, ::Pair{Int64, Int64}, ::Any; init, bias) at /home/jack/.julia/packages/GeometricFlux/ErNzP/src/layers/conv.jl:26
  GCNConv(::Pair{Int64, Int64}, ::Any; kwargs...) at /home/jack/.julia/packages/GeometricFlux/ErNzP/src/layers/conv.jl:34
jarbus commented 2 years ago

@yuehhua The fix here seems to be replacing lines like adj_mat = Matrix{Float32}(adjacency_matrix(g)) |> gpu with lines like adj_mat = FeaturedGraph(adjacency_matrix(g)) |> gpu, does this seem right? I can submit a PR with a fix if so.
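A minimal sketch of the proposed change, assuming `g` is the graph already loaded in examples/gcn.jl (variable names taken from the snippet above):

```julia
using GeometricFlux  # for FeaturedGraph, adjacency_matrix
using CUDA, Flux     # for gpu

# Before (fails: none of the GCNConv candidates accepts a plain matrix
# together with a Pair, as the MethodError above shows):
# adj_mat = Matrix{Float32}(adjacency_matrix(g)) |> gpu

# After: wrap the adjacency matrix in a FeaturedGraph so it matches the
# GCNConv(::AbstractFeaturedGraph, ::Pair{Int64, Int64}, ::Any; ...) candidate.
fg = FeaturedGraph(adjacency_matrix(g)) |> gpu
```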

yuehhua commented 2 years ago

I think

fg = FeaturedGraph(g) |> gpu

should be fine.

ndgnuh commented 2 years ago

I think

fg = FeaturedGraph(g) |> gpu

should be fine.

After using this, I got

LoadError: MethodError: no method matching zero(::Type{Any})

Top level stack trace points to

 [41] macro expansion
    @ ~/.cache/julia/packages/Flux/qp1gc/src/optimise/train.jl:136 [inlined] 
 [42] top-level scope
    @ ~/.cache/julia/packages/Juno/n6wyj/src/progress.jl:134

I don't even use Juno.
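The Juno frames appear because `Flux.train!` in that Flux version wraps its loop in a `Juno.@progress` macro, so Juno shows up in the stack trace even when it is never imported directly. One way to localize the `zero(::Type{Any})` error is a hand-written loop that bypasses `train!`; this is a hedged sketch assuming `model`, `loss`, `train_data`, and `opt` are defined as in examples/gcn.jl:

```julia
using Flux

# Manual training loop equivalent to Flux.train!, without the Juno
# progress macro, so the Zygote error surfaces at the gradient call.
ps = Flux.params(model)
for (x, y) in train_data
    gs = gradient(() -> loss(x, y), ps)  # error should surface here
    Flux.Optimise.update!(opt, ps, gs)
end
```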

zqni commented 2 years ago

I think

fg = FeaturedGraph(g) |> gpu

should be fine.

After using this, I got

LoadError: MethodError: no method matching zero(::Type{Any})

I believe this error is related to Zygote and might result from a wrong setting somewhere; the code runs perfectly on the CPU.

yuehhua commented 2 years ago

I have updated the master branch and it works with following:

fg = FeaturedGraph(g)  # pass to gpu together in model layers

## Model
model = Chain(GCNConv(fg, num_features=>hidden, relu),
              Dropout(0.5),
              GCNConv(fg, hidden=>target_catg),
              ) |> gpu;

But there is a small problem: the loss doesn't drop but increases. I'm still fixing this.