joshhansen opened 5 months ago
Looking at the experiment.log, the problem seems to come from Vulkan's validation layer, not from a multi-device error. I tested on my system and I can run the training with multiple devices. Maybe you can try disabling the Vulkan validation layer (branch wgpu-no-validation).
Also, you could test using the LibTorch backend instead.
Training does appear to work with the LibTorch GPU backend with multiple GPUs specified. That may not be much use to me, though: I am specifically migrating away from LibTorch due to its lack of thread safety.
Running on the wgpu-no-validation branch surprisingly results in the same validation error:
experiment.log
@joshhansen My intuition would suggest that the problem may come from a precision error, where wgpu can't convert the literal to a float32. If you change that value, does it work?
Change 0.00000000023283064365386963f? My apologies, I'm not familiar with Burn's compilation process; where would that value "live" such that I could modify it?
@joshhansen I guessed it was a constant defined by your code 😅
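For what it's worth, the float32 hypothesis can be probed in isolation: the literal in question is exactly 2^-32 (i.e. 1/4294967296), a power of two, so narrowing it to f32 should be lossless. A minimal standalone sketch, independent of Burn or wgpu:

```rust
fn main() {
    // The literal from the error discussion; it is exactly 2^-32.
    let literal: f64 = 0.00000000023283064365386963;
    assert_eq!(literal, (2.0_f64).powi(-32));

    // Powers of two within the f32 exponent range (about 2^-126 to 2^127
    // for normal values) need no mantissa bits, so the f64 -> f32 cast
    // round-trips without loss.
    let narrowed = literal as f32;
    assert_eq!(narrowed as f64, literal);

    println!("f64 literal: {literal:e}");
    println!("as f32:      {narrowed:e}");
}
```

If that holds on your machine too, the Rust-side narrowing itself is lossless, and the validation failure would more likely come from how the generated WGSL or the driver handles the constant, though that is speculation on my part.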
Describe the bug
Running the text classification example's AG News training step on multiple discrete GPUs fails with "Shader validation error":
This error partially overlaps with the one in #1088.
To Reproduce
On a system with two or more discrete GPUs:
Edit examples/ag-news-train.rs like so:
cargo run --example ag-news-train --features wgpu
Expected behavior
The training proceeds, utilizing both GPUs.
Desktop (please complete the following information):