-
So far, our learning rate is a fixed value; some papers and mllib are starting to use an adaptive learning rate based on the current iteration.
This is useful for reducing the total number of iterations.
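For illustration, a minimal step-decay schedule can be sketched in plain Python (the function name and decay constants below are made up for this example; PyTorch ships equivalents such as `torch.optim.lr_scheduler.StepLR`):

```python
def adaptive_lr(base_lr, iteration, decay_rate=0.5, decay_every=1000):
    """Step decay: multiply the learning rate by decay_rate
    every decay_every iterations."""
    return base_lr * (decay_rate ** (iteration // decay_every))

# The learning rate shrinks as training progresses:
print(adaptive_lr(0.1, 0))     # 0.1
print(adaptive_lr(0.1, 1000))  # 0.05
print(adaptive_lr(0.1, 2500))  # 0.025
```

Larger steps early on cover ground quickly; smaller steps later let the weights settle, which is where the reduction in total iterations comes from.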
-
I started training with multiple GPUs by running the following command:
`CUDA_VISIBLE_DEVICES=0,1 python train.py --train_data data_lmdb/training --valid_data data_lmdb/validation --Transformation TPS --Feat…
-
Dear All,
I successfully compiled AutoDock GPU on a Debian system, but when trying to run an example:
./bin/autodock_gpu_128wi --lfile ./input/1stp/derived/1stp_ligand.pdbqt --ffile ./input/1stp…
-
# I plan to organize and upload this
- ### SGD (existing optimizer)
- ### Adam
- ### RMSprop
Junuu updated 5 years ago
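For reference while organizing these notes, the three update rules can be sketched in plain Python on scalar weights (hyperparameter defaults follow common conventions, not any particular codebase):

```python
import math

def sgd_step(w, grad, lr=0.01):
    """Vanilla SGD: w <- w - lr * grad."""
    return w - lr * grad

def rmsprop_step(w, grad, state, lr=0.001, alpha=0.99, eps=1e-8):
    """RMSprop: scale the step by a running average of squared gradients."""
    state["sq"] = alpha * state["sq"] + (1 - alpha) * grad ** 2
    return w - lr * grad / (math.sqrt(state["sq"]) + eps)

def adam_step(w, grad, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam: bias-corrected first (m) and second (v) moment estimates."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return w - lr * m_hat / (math.sqrt(v_hat) + eps)
```

SGD uses the raw gradient; RMSprop and Adam normalize it by recent gradient magnitude, with Adam adding momentum and bias correction on top.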
-
When I train the model using warp-ctc, the CTC criterion returns a loss of inf. Is there anything wrong? How can I solve this problem?
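Not warp-ctc itself, but for context: in PyTorch's built-in `torch.nn.CTCLoss`, an infinite loss typically means the input sequence is too short to be aligned to the target, and the `zero_infinity=True` flag zeroes those losses (and their gradients). A small reproduction, assuming that is the cause here:

```python
import torch

torch.manual_seed(0)

# T = 10 time steps cannot emit 15 target labels, so the CTC loss
# for this sample has no valid alignment and comes out as inf.
log_probs = torch.randn(10, 1, 20).log_softmax(2)   # (T, N, C)
targets = torch.randint(1, 20, (1, 15), dtype=torch.long)
input_lengths = torch.tensor([10])
target_lengths = torch.tensor([15])

ctc = torch.nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss)  # tensor(inf)

# zero_infinity=True replaces the infinite loss (and its gradient) with 0,
# so one bad sample does not poison the whole batch.
ctc_safe = torch.nn.CTCLoss(blank=0, zero_infinity=True)
loss_safe = ctc_safe(log_probs, targets, input_lengths, target_lengths)
print(loss_safe)  # tensor(0.)
```

The real fix is usually to filter or truncate samples whose label length exceeds the number of time steps the model produces; `zero_infinity` just keeps training from diverging in the meantime.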
-
workers: 4
batch_size: 16
adam: False
lr: 1
beta1: 0.9
rho: 0.95
eps: 1e-08
grad_clip: 5
batch_ratio: 1
total_data_usage_ratio: 1.0
batch_max_length: 300
imgH: 32
imgW: 1024
rgb: True
…
-
Hello,
I just installed autodock-gpu on Ubuntu 20.04 (two 3080 cards, one CUDA version (11.5)) with the `make DEVICE=GPU NUMWI=128` command.
`autodock_gpu_128wi` did appear in the bin directory.
B…
-
Our for-loop implementations of the optimizers are not the most careful about minimizing memory use. For example, we have `max_exp_avg_sqs[i].copy_(torch.maximum(max_exp_avg_sqs_i, exp…
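A sketch of the kind of saving being discussed: `torch.maximum` accepts an `out=` argument, so the temporary tensor plus `copy_` can be replaced by writing the result in place (toy tensors below, not the optimizer's actual buffers):

```python
import torch

a = torch.tensor([1.0, 5.0, 3.0])
b = torch.tensor([4.0, 2.0, 6.0])

# Allocating pattern: torch.maximum builds a temporary tensor,
# then copy_ writes it back into the state buffer.
a.copy_(torch.maximum(a, b))

# In-place alternative: write the result directly into the buffer,
# skipping the temporary allocation entirely.
c = torch.tensor([1.0, 5.0, 3.0])
torch.maximum(c, b, out=c)

assert torch.equal(a, c)  # both paths give [4., 5., 6.]
```

Per step the temporary is small, but in a for-loop over every parameter tensor, every iteration, the avoided allocations add up.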
-
The idea is to allow for more than the generic gradient descent algorithm when it comes to network weight optimization. When building a network, the user should be allowed to specify which optimization…
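One way this could look, as a rough sketch (the registry, the `SGD` class, and the `build_optimizer` helper are hypothetical, not an existing API in this project):

```python
# Registry mapping user-facing names to optimizer classes.
OPTIMIZERS = {}

def register(name):
    """Class decorator that adds an optimizer to the registry."""
    def deco(cls):
        OPTIMIZERS[name] = cls
        return cls
    return deco

@register("sgd")
class SGD:
    def __init__(self, lr=0.01):
        self.lr = lr

    def step(self, weights, grads):
        """One vanilla gradient-descent update per weight."""
        return [w - self.lr * g for w, g in zip(weights, grads)]

def build_optimizer(name, **kwargs):
    """Look up the optimizer chosen at network-construction time."""
    try:
        return OPTIMIZERS[name](**kwargs)
    except KeyError:
        raise ValueError(f"unknown optimizer {name!r}; choices: {sorted(OPTIMIZERS)}")
```

New optimizers (Adam, RMSprop, …) then only need the decorator to become selectable, and the network-building code never has to change.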
-
Thanks for the amazing work!
Could you kindly tell me what the customized function for accuracy would be? I want to use accuracy as a metric instead of the dice coefficient.
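A hedged sketch of such a metric, assuming a Keras/TensorFlow setup with sigmoid outputs (the `pixel_accuracy` name is made up; it follows the same `(y_true, y_pred)` signature a dice-coefficient metric would use):

```python
from tensorflow.keras import backend as K

def pixel_accuracy(y_true, y_pred):
    """Fraction of positions where the thresholded prediction
    matches the ground-truth label."""
    y_pred_bin = K.round(y_pred)  # threshold sigmoid outputs at 0.5
    correct = K.cast(K.equal(y_true, y_pred_bin), "float32")
    return K.mean(correct)

# Usage (dice_loss assumed to be defined elsewhere in the project):
# model.compile(optimizer="adam", loss=dice_loss, metrics=[pixel_accuracy])
```

Note that for binary outputs Keras's built-in `metrics=["accuracy"]` computes essentially the same thing, so a custom function is only needed if you want a different threshold or masking behavior.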