-
Hi,
Your package seems very interesting!
I was wondering if you planned to support the case where your ODE is given as a torch.nn.Module?
This would be extremely useful and the parallel evaluations…
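One way this could look (a generic sketch, not the package's actual API: `ODEFunc`, `euler_integrate`, and the network sizes are all assumptions) is to define the ODE right-hand side as a `torch.nn.Module` and integrate it with a simple explicit-Euler loop:

```python
# Sketch only: wrapping an ODE right-hand side f(t, y) as a torch.nn.Module
# and integrating it with explicit Euler. Names and shapes are illustrative.
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Learnable right-hand side dy/dt = f(t, y)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 16), nn.Tanh(), nn.Linear(16, dim))

    def forward(self, t: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # An autonomous system for simplicity; t is accepted but unused.
        return self.net(y)

def euler_integrate(func: nn.Module, y0: torch.Tensor,
                    t0: float, t1: float, steps: int) -> torch.Tensor:
    """Explicit Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    h = (t1 - t0) / steps
    y = y0
    for n in range(steps):
        t = torch.tensor(t0 + n * h)
        y = y + h * func(t, y)
    return y

func = ODEFunc(dim=2)
y0 = torch.zeros(8, 2)                    # batch of 8 initial states
yT = euler_integrate(func, y0, 0.0, 1.0, steps=20)
print(yT.shape)                           # torch.Size([8, 2])
```

Because the solver only calls `func(t, y)` on batched tensors, all trajectories in the batch are evaluated in parallel and gradients flow back into the module's parameters.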
-
Hi, Robert!
I have reproduced PeleeNet in PyTorch; I hope it will be widely used by others.
[pytorch-peleenet](https://github.com/wpf535236337/pytorch-peleenet)
-
Could you kindly provide the versions of CUDA and PyTorch the project is using?
The project needs Deformable-Convolution-V2 (https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch.git), but it …
-
While preparing the benchmark for eager and dynamo using the code from the fork https://github.com/tfogal/NeMo, I get errors in the dynamo case.
## 🐛 Bug
Seems like `dynamo` stopped working for NeM…
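A minimal sanity check of the eager-vs-dynamo comparison (a generic sketch, not the NeMo benchmark itself; the small model below is an assumption) can use `torch.compile` with the `"eager"` backend, which exercises dynamo's graph capture without the inductor compiler:

```python
# Sketch: compare eager output against a dynamo-captured run of the same
# module. backend="eager" traces with dynamo but skips codegen.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU())
compiled = torch.compile(model, backend="eager")

x = torch.randn(4, 8)
eager_out = model(x)
compiled_out = compiled(x)
print(torch.allclose(eager_out, compiled_out))
```

If this kind of minimal repro passes but the full benchmark fails, the error is likely in how dynamo handles specific NeMo constructs rather than in the setup.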
-
Hi nerfstudio folks, thanks for your excellent library!
I have a minor question: it is usually required that the system CUDA toolkit version and the PyTorch runtime CUDA version be consistent to comp…
-
As of https://github.com/pytorch/pytorch/commit/21d4c48059478e6fe4871a09966a36ca3986a7ec, the following aten ops are not implemented by XPU backend which are required by https://github.com/dvrogozh/op…
-
Hi, I tried to test a hyperparameter sweep with PyTorch Lightning using a minimal example (1-D regression with one hidden layer). The sweep appears to start, but I keep getting error messages …
-
How to use PyTorch Hooks
- https://medium.com/the-dl/how-to-use-pytorch-hooks-5041d777f904
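As a quick reference, a forward hook registered with `register_forward_hook` is called with `(module, input, output)` after each forward pass; a minimal sketch (the model and hook names are illustrative):

```python
# Minimal forward-hook example: capture an intermediate activation.
import torch
import torch.nn as nn

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
handle = model[0].register_forward_hook(save_activation("fc1"))

x = torch.randn(3, 4)
_ = model(x)
print(activations["fc1"].shape)   # torch.Size([3, 8])

handle.remove()                   # detach the hook when no longer needed
```

The returned handle should be removed when the hook is no longer needed, so stale closures do not keep firing on every forward pass.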
-
# PyTorch Distributed Training
### Parallelism Strategies
1. Depending on the parallelism strategy, distributed training can be divided into model parallelism and data parallelism.
- Model parallelism: mainly used when the model is larger than a single GPU's memory and cannot be loaded onto one GPU; the model is split into several parts, each loaded onto a different GPU for training.
- Data parallelism: the case encountered most often in practice. Each GPU keeps a copy of the model, and each batch of samples is split into shares distributed to the GPUs for parallel computation…
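The data-parallel case above can be sketched with `torch.nn.DataParallel` (a single-process illustration; `DistributedDataParallel` is the recommended choice for real multi-GPU or multi-node training):

```python
# Sketch of data parallelism: DataParallel splits the batch along dim 0
# across visible GPUs, runs each replica, and gathers the outputs.
# On a CPU-only machine it falls back to calling the wrapped module directly.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
parallel_model = nn.DataParallel(model)   # replicates across available GPUs

batch = torch.randn(32, 10)               # split into per-GPU chunks along dim 0
out = parallel_model(batch)
print(out.shape)                          # torch.Size([32, 2])
```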
-
**Summary**
Since the weather community, and especially ECMWF, has moved towards a single zarr archive that contains all the data in the state (domain), and one that contains all the data in the boundary, …