Global seed set to 42
Using 16bit native Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Trainer(limit_train_batches=1.0) was configured so 100% of the batches per epoch will be used.
Global seed set to 42
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
0 | model | NeRFModel | 12.6 M
12.6 M Trainable params
0 Non-trainable params
12.6 M Total params
25.220 Total estimated model params size (MB)
Epoch 0: : 0it [00:00, ?it/s]
After this line the code gets stuck: the progress bar stays at "0it" and training never advances.
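Since the progress bar never leaves 0 iterations, one plausible culprit is the data pipeline rather than the Trainer itself (e.g. a DataLoader worker that blocks forever). A way to check is to try pulling a single batch from the train dataloader under a timeout, outside of Lightning. Below is a stdlib-only sketch of that idea; the helper name `first_item_with_timeout` is mine, not a Lightning or PyTorch API, and the plain list stands in for the real dataloader:

```python
import queue
import threading
import time


def first_item_with_timeout(iterable, timeout=30.0):
    """Pull one item from `iterable` in a worker thread.

    Returns the item, or raises TimeoutError if nothing arrives within
    `timeout` seconds -- which would point at the data pipeline
    (e.g. a hanging DataLoader) rather than the Trainer.
    """
    q = queue.Queue()

    def worker():
        try:
            q.put(("ok", next(iter(iterable))))
        except Exception as exc:  # surface iterator errors too
            q.put(("err", exc))

    threading.Thread(target=worker, daemon=True).start()
    try:
        status, payload = q.get(timeout=timeout)
    except queue.Empty:
        raise TimeoutError(f"no batch produced within {timeout}s")
    if status == "err":
        raise payload
    return payload


# A plain list stands in for `datamodule.train_dataloader()` here:
print(first_item_with_timeout([{"rays": 0}, {"rays": 1}], timeout=5.0))
```

If this times out when given the real dataloader, the hang is in data loading (a common workaround to test is `num_workers=0`); if it returns a batch promptly, the problem is more likely in the distributed setup.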