-
I can't find any setting for the learning rate in config.lua, and the current learning rate is not displayed at each iteration during training. Do you have any idea about it?
Thanks!
-
So far, our learning rate is a fixed value; some papers and ML libraries are starting to use an adaptive learning rate that depends on the current iteration.
This is useful for reducing the total number of iterations.
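As an illustration of an iteration-dependent rate, here is a minimal step-decay schedule in plain Python. The function name and the drop/interval values are my own choices, not anything from the project above:

```python
def step_decay_lr(base_lr, iteration, drop=0.5, every=1000):
    """Illustrative step-decay schedule: multiply the rate by
    `drop` once every `every` iterations."""
    return base_lr * (drop ** (iteration // every))

step_decay_lr(0.1, 0)     # 0.1  (no decay yet)
step_decay_lr(0.1, 2500)  # 0.025 (two drops applied)
```

Printing `step_decay_lr(base_lr, t)` alongside the loss each iteration would also address the display question above.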
-
If you try "Stateful LSTMs, Stacked" with the following parameters, you may get a quicker (and possibly better) solution in terms of frequency and phase (not amplitude):
```
batch_size = 1
model = Sequential()
…
```
-
Hi,
I compiled Autodock-GPU with:
`make DEVICE=GPU NUMWI=128`
When I run the example:
`./bin/autodock_gpu_64wi --ffile ./input/1stp/derived/1stp_protein.maps.fld --lfile ./input/1stp/deriv…
-
# Optimizers
- You can first instantiate an optimizer object and then pass it to model.compile(), as in the following example:
```
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)
```
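To make those SGD parameters concrete, below is a small pure-Python sketch of one update step, assuming the legacy Keras semantics for `lr`, `decay`, `momentum`, and `nesterov` (the helper name and the toy objective are mine):

```python
def sgd_step(w, grad, state, lr=0.01, decay=1e-6, momentum=0.9, nesterov=True):
    """One SGD update, assuming legacy-Keras-style semantics:
    time-based decay of the learning rate plus (Nesterov) momentum."""
    state["t"] += 1
    lr_t = lr / (1.0 + decay * state["t"])   # decayed learning rate
    v = momentum * state["v"] - lr_t * grad  # velocity accumulates past gradients
    state["v"] = v
    if nesterov:
        # Nesterov takes a look-ahead step along the velocity direction.
        return w + momentum * v - lr_t * grad
    return w + v

# Minimize f(w) = w**2 (gradient 2w), starting from w = 5.0.
state = {"v": 0.0, "t": 0}
w = 5.0
for _ in range(200):
    w = sgd_step(w, 2.0 * w, state)
# w is now close to the minimum at 0
```

The decay term is why the effective learning rate printed by some frameworks shrinks slowly over training even though `lr` itself is fixed.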
-…
-
Hi everyone,
I followed the instructions on how to train a new model, but I didn't quite understand how I should create my own database and run the following command. Can somebody explain this to me?
…
-
# To be organized and uploaded
- ### SGD (the existing optimizer)
- ### Adam
- ### RMSprop
Junuu updated 5 years ago
-
When I train the model using warp-ctc, the CTC criterion returns a loss of inf. Is there anything wrong? How can I solve this problem?
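One common cause of an infinite CTC loss (not necessarily the one here) is an input sequence with fewer time steps than the target's minimum alignment length: CTC needs one frame per label plus a blank frame between each pair of equal adjacent labels. A small sketch of that constraint, with a helper name of my own:

```python
def min_input_frames(labels):
    """Minimum number of time steps CTC needs to emit `labels`:
    one frame per symbol, plus one blank frame between equal neighbours."""
    frames = len(labels)
    frames += sum(1 for a, b in zip(labels, labels[1:]) if a == b)
    return frames

# 'hello' repeats 'l', so it needs at least 6 input frames, not 5.
min_input_frames("hello")  # 6
```

If any training example violates this bound, no valid alignment exists and the loss for that example is inf; filtering such examples (or checking the input/label length arrays passed to warp-ctc) is a reasonable first debugging step.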
-
Hello,
I just installed conx and my tensorflow version is 2.4.0.
When I try to import conx with `import conx as cx`, I get the following error message.
-------------------------------------------…
-
I have almost 250,000 (25W) images for handwritten digit recognition; 50,000 (5W) are real and the rest are simulated. It overfits very fast.
Both Adam (lr = 0.001) and Adadelta (lr = 1) overfit very fast.
both …