[Open] alerem18 opened this issue 11 months ago
That's not the intended use for Flux.train!. This function is meant to iterate over an entire epoch, not a single batch. Try writing your loop as:
using Flux
using Flux: onecold, logitcrossentropy
using ProgressBars: tqdm, set_postfix   # tqdm / set_postfix come from ProgressBars.jl
using Flux.Zygote: ignore               # keeps the accuracy bookkeeping out of the gradient

function train_loop(model, optimizer, train_loader, test_loader; epochs=5)
    for epoch ∈ 1:epochs
        iter = tqdm(train_loader)
        total = 0
        corrects = 0
        for (X, Y) ∈ iter
            grads = Flux.gradient(model) do m
                predicted = m(X)
                ignore() do
                    b_size = size(X)[end]
                    corrects += sum(onecold(predicted, 0:9) .== onecold(Y, 0:9))
                    total += b_size
                end
                logitcrossentropy(predicted, Y)
            end
            # `optimizer` is expected to be the state returned by Flux.setup(...)
            optimizer, model = Flux.update!(optimizer, model, grads[1])
            set_postfix(iter, accuracy=corrects / total)
        end
        val_accuracy = accuracy(model, test_loader)  # accuracy(model, loader) is assumed to be defined elsewhere
        @info "Epoch $epoch/$epochs | Accuracy : $val_accuracy"
    end
end
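For comparison, here is a minimal sketch of how Flux.train! itself is meant to be used: one call covers a whole epoch by iterating the entire DataLoader. This assumes the explicit-gradient API with an optimiser state from Flux.setup, and reuses model and train_loader from the snippet above; the Adam learning rate and epoch count are only placeholders.

opt_state = Flux.setup(Adam(1e-3), model)   # placeholder optimiser and learning rate
for epoch ∈ 1:5
    # one train! call == one full pass over train_loader
    Flux.train!(model, train_loader, opt_state) do m, X, Y
        logitcrossentropy(m(X), Y)
    end
end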
I did that already; same speed, even a little slower.
My guess is that this is due to NNlib's CPU implementations of Conv etc. being sub-optimal. That's the target of e.g. https://github.com/FluxML/NNlib.jl/pull/540, and seeing whether that PR speeds up this example might be helpful. (And if it does, finding a way to push that PR forwards.)
Otherwise, isolating exactly which operations are slower would be more helpful than overall times. Xref earlier issue about the same thing https://github.com/FluxML/Flux.jl/issues/2300
will there be any updates?
Have you seen the linked PR at https://github.com/FluxML/NNlib.jl/pull/540? Other than contributing performance improvements to NNlib itself, best thing would be to do some benchmarking of what the bottlenecks in the Julia code are with a profiler. Ideally you could narrow it down to 1-2 types of layers which could be compared directly against their equivalents in PyTorch.
Whatever it is, it's related to the backward path. The feed-forward path in Flux is already faster than PyTorch, or at least the same speed.
That's why I asked to narrow it down. If you can find which specific layers are slower on the backwards path and provide a MWE demonstrating that, then we have something to work with.
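For anyone attempting that, a minimal sketch of such a backwards-path MWE, assuming a single Conv layer like the one in the benchmarks below (the layer and batch shapes are placeholders, and sum is only a stand-in loss):

using Flux, BenchmarkTools

m = Conv((3, 3), 1 => 16; stride=(2, 2), pad=1)
A = randn(Float32, 28, 28, 1, 100)

@btime $m($A)                                # forward pass only
@btime Flux.gradient(m -> sum(m($A)), $m)    # forward + backward pass

Running the same pair of timings for each layer type (Conv, Dense, RNN) would show which backward passes fall furthest behind their PyTorch equivalents.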
MWE
Here are CPU tests; I've not tested with GPU.
FeedForward Flux:
using Flux
using BenchmarkTools
m = Conv((3, 3), 1 => 16; stride=(2, 2), pad=1)
A = Float32.(randn(28, 28, 1, 100))
# compile for the first time
m(A)
@btime m(A)
753.000 μs (76 allocations: 2.44 MiB)
FeedForward Pytorch:
import torch
import torch.nn as nn
m = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), stride=(2, 2), padding=1)
A = torch.randn((100, 1, 28, 28))
%timeit m(A)
172 µs ± 7.32 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
Flux is significantly slower (almost 6 times) than PyTorch on CPU!
The same approach for a Dense layer shows PyTorch is 1.7 times faster than Flux. RNNs in Flux are also significantly slower than PyTorch (6 times slower), just like the CNN, since we need to loop over sequences.
@aminaqi that's a different issue, namely https://github.com/FluxML/NNlib.jl/issues/234. As mentioned in that issue and the linked Discourse discussion, make sure you're starting Julia with multiple threads and using MKL for a proper apples-to-apples comparison with PyTorch.
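For reference, a quick way to check both of those from the REPL (this assumes the MKL.jl package is installed; loading it swaps the default OpenBLAS backend for MKL):

# start Julia with multiple threads, e.g.  julia --threads=6
using MKL             # must come early so the BLAS backend is switched to MKL
using LinearAlgebra

@show Threads.nthreads()       # Julia threads (used by NNlib's conv kernels)
@show BLAS.get_num_threads()   # BLAS threads (used by Dense / matmul)
BLAS.get_config()              # confirms which BLAS library is active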
For this issue, it's not clear where the exact slowdown(s) come from. What I'm sure of is that it can't be solely the conv forward pass, which is what you're benchmarking.
PS. it looks like the formatting on your comments got messed up? Every one quotes the entirety of the one before it and it probably shouldn't.
I've started Julia with 6 threads. Anyway, even with multiple threads it's still significantly slower than PyTorch, because that's only the feed-forward pass; there is a slowdown on the backward pass too, which makes Flux about 10 times slower than PyTorch or TensorFlow. And not only Conv, but RNNs as well.
Are you seeing Julia be 10x slower on the forward and backwards pass, for CNNs and RNNs, against PyTorch and TensorFlow? I'm pretty sure we are slower on all of those, but 10x for all of them would not be expected. If that's really what you're seeing, I'd recommend starting a Discourse thread with some MWEs for the various benchmarks and linking back to that here. It's possible that Flux itself is only a small part of the issue there, and Discourse will allow more folks to weigh in on what other parts of your code may be contributing (only Flux maintainers really follow this issue tracker).
Either way, the performance gap being discussed in this issue already has a reasonable benchmark. It just needs to be narrowed down to a couple of layers and/or profiled so we can see what the bottlenecks are to take action on them. If nobody has bandwidth to do that, then I'm not sure there's much else to discuss here.
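If someone does have the bandwidth, a rough sketch of what that profiling could look like; the layer, input shape, and iteration count are placeholders, and sum is only a stand-in loss:

using Flux, Profile

m = Conv((3, 3), 1 => 16; stride=(2, 2), pad=1)
A = randn(Float32, 28, 28, 1, 100)
loss(m, x) = sum(m(x))

Flux.gradient(loss, m, A)       # run once so compilation is excluded
Profile.clear()
@profile for _ in 1:200
    Flux.gradient(loss, m, A)
end
Profile.print(mincount=100)     # keep only frames with many samples, i.e. the hotspots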