-
Hi, I have used your SPP layer implementation, but I found that PyTorch does not support batching tensors of different sizes. So if I want to forward images through the net, I need to set batch_size=1 like this:
```dataLoa…
-
Hi,
I have two questions about the way the log-likelihood is computed in the code.
1) For mini-batch training, the log-likelihood is multiplied by a factor of N/B, where N is the total number of…
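The N/B factor mentioned above can be seen as an unbiased estimate of the full-data log-likelihood from a mini-batch. A minimal NumPy sketch (with hypothetical per-example values, not the code in question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example log-likelihoods for a dataset of N points.
N = 1000            # total number of training examples
B = 50              # mini-batch size
log_lik = rng.normal(-2.0, 0.5, size=N)

# Full-data log-likelihood: the quantity we want to estimate.
full = log_lik.sum()

# Mini-batch estimate: batch sum scaled by N/B (equivalently N * batch mean),
# which is an unbiased estimator of the full sum.
batch = rng.choice(log_lik, size=B, replace=False)
estimate = (N / B) * batch.sum()

print(full, estimate)
```

Scaling by N/B rather than using the raw batch sum keeps the likelihood term on the same scale as any prior or regularizer that is defined over the whole dataset.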
-
-
How about a Matlab test case?
I tried to implement a Matlab version of AdaBelief and compare it with SGD with momentum at
https://github.com/pcwhy/AdaBelief-Matlab
I found that sometimes AdaBelie…
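For reference, the AdaBelief update being ported can be sketched in plain NumPy; the key difference from Adam is that the second moment tracks the gradient's deviation from its EMA, (g - m)^2, rather than g^2. The toy objective below is an assumption for the smoke test, not from the repository:

```python
import numpy as np

def adabelief_step(theta, grad, m, s, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaBelief step (sketch). s is an EMA of (grad - m)^2, not grad^2."""
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2 + eps
    # Bias correction, as in Adam.
    m_hat = m / (1 - beta1 ** t)
    s_hat = s / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(s_hat) + eps)
    return theta, m, s

# Smoke test on f(x) = x^2, whose gradient is 2x (hypothetical objective).
theta, m, s = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    theta, m, s = adabelief_step(theta, 2 * theta, m, s, t, lr=0.1)
print(theta)
```

Comparing this update step against the Matlab port line by line is a quick way to localize any divergence from SGD-with-momentum behavior.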
-
Hello,
I want to try `pix2latent` on the FFHQ dataset on Google Colab. Due to RAM constraints, Colab won't run the optimization process with CMA or BasinCMA (unless I use the cars dataset), so I ha…
-
**System information**
- TensorFlow version: 2.1
- Are you willing to contribute it: Yes
**Describe the feature and the current behavior/state.**
Currently, passing `clipnorm` to a `tf.keras.o…
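The distinction at issue, per-tensor `clipnorm` versus clipping by the global norm across all gradients, can be sketched in plain NumPy (the gradient values are made up for illustration):

```python
import numpy as np

def clip_by_norm(g, clip):
    """Per-tensor clipping: each gradient is rescaled independently."""
    n = np.linalg.norm(g)
    return g * (clip / n) if n > clip else g

def clip_by_global_norm(grads, clip):
    """Global clipping: all gradients share one scale factor clip / global_norm."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = clip / global_norm if global_norm > clip else 1.0
    return [g * scale for g in grads]

grads = [np.array([3.0, 4.0]), np.array([0.0, 12.0])]  # norms 5 and 12, global norm 13
per_tensor = [clip_by_norm(g, 5.0) for g in grads]      # [3, 4] untouched, [0, 12] -> [0, 5]
global_clipped = clip_by_global_norm(grads, 5.0)        # both scaled by 5/13
print(per_tensor)
print(global_clipped)
```

Per-tensor clipping changes the direction of the combined update (it rescales some tensors but not others), while global-norm clipping preserves it, which is why the two behaviors are worth distinguishing in the optimizer API.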
-
Under Module 1 of the "Optimisation: Stochastic Gradient Descent" chapter, there are multiple instances where it is mentioned that W has 30730 parameters. But according to the previous chapter and the…
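For reference, the 30730 figure is consistent with a linear classifier on CIFAR-10 with the bias folded into W (an assumption about the chapter's setup):

```python
# Parameter count for a linear CIFAR-10 classifier (assumed setup):
# each 32x32x3 image flattens to 3072 features, a constant 1 is appended
# for the bias, and W maps the 3073-vector to 10 class scores.
pixels = 32 * 32 * 3          # 3072 input features
with_bias = pixels + 1        # 3073 columns once the bias is folded into W
classes = 10
params = classes * with_bias  # 30730 entries in W
print(params)
```

Without the bias trick the count would be 10 * 3072 = 30720 weights plus 10 biases, so whether the chapters agree depends on which convention each one uses.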
-
- Version and environment info:
PaddlePaddle version: 1.8.3, CUDA 10, cuDNN 7.6
According to the official tutorial: https://www.paddlepaddle.org.cn/tutorials/projectdetail/593621
batch_loss should first be averaged and then back-propagated.
But according to the example in the models project (lines 160 and 163): https://github.com/Pad…
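The difference between averaging and summing the batch loss before back-propagation can be shown with a hand-computed gradient in NumPy (toy linear model, not the PaddlePaddle code in question):

```python
import numpy as np

# Toy model: loss_i = (w * x_i - y_i)^2, so d(loss_i)/dw = 2 * (w * x_i - y_i) * x_i.
rng = np.random.default_rng(0)
B = 8                              # batch size (hypothetical)
x = rng.normal(size=B)
y = rng.normal(size=B)
w = 0.5

per_example_grad = 2 * (w * x - y) * x

grad_of_sum = per_example_grad.sum()    # gradient of batch_loss.sum()
grad_of_mean = per_example_grad.mean()  # gradient of batch_loss.mean()

# Averaging first divides the gradient by B, so the effective
# learning rate no longer grows with the batch size.
print(grad_of_sum, grad_of_mean)
```

Both conventions can train correctly, but they imply different learning rates: summing requires scaling the learning rate down by B to match the averaged version, which is likely why the tutorial and the models example appear to disagree.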
-
Hi Andrew,
It is really excellent work to build a machine learning library for PHP. A big thank you for that.
Recently I have been trying to use the MLP Regressor in Rubix ML for a specific set of data…
-
Add description for gradient descent: batch gradient descent and stochastic gradient descent.
Optionally, also add details about mini-batch GD.
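The three variants requested above could be illustrated with a short NumPy sketch on a synthetic least-squares problem (data and hyperparameters are made up); batch GD steps once per epoch on the full data, SGD once per example, and mini-batch GD sits in between:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize mean squared error of X @ w vs y.
N, D = 200, 3
X = rng.normal(size=(N, D))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

def grad(w, Xb, yb):
    # Gradient of the mean squared error on the (mini-)batch (Xb, yb).
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

w = np.zeros(D)
lr = 0.05
for epoch in range(100):
    # Batch GD would take one step per epoch:  w -= lr * grad(w, X, y)
    # SGD would loop over single examples; mini-batch GD interpolates:
    idx = rng.permutation(N)
    for start in range(0, N, 32):          # mini-batches of 32
        b = idx[start:start + 32]
        w -= lr * grad(w, X[b], y[b])

print(w)  # approaches true_w
```

Mini-batch GD is the usual default: it smooths the gradient noise of pure SGD while still giving many cheap updates per epoch, unlike full-batch GD.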