-
Hi,
Thanks for your idea and work. I would like to understand how you dealt with unmatched sample sizes, that is, when the two batch sizes are not the same.
I noticed that in the MMD loss you used…
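In case it helps frame the question: the standard (biased) MMD² estimator does not actually require equal sample sizes, since each of its three kernel terms is normalized separately by m², n², and m·n. A minimal NumPy sketch of that estimator (the RBF kernel and `gamma` value are my own assumptions, not necessarily the repo's choice):

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel between rows of a (m, d) and b (n, d).
    sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * sq)

def mmd2(x, y, gamma=1.0):
    # Biased MMD^2 estimate. The three terms are normalized by
    # m^2, n^2, and m*n separately, so m and n may differ.
    m, n = len(x), len(y)
    return (rbf_kernel(x, x, gamma).sum() / m**2
            + rbf_kernel(y, y, gamma).sum() / n**2
            - 2 * rbf_kernel(x, y, gamma).sum() / (m * n))
```

With this form, a source batch of size 2 and a target batch of size 32 are both valid inputs.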
-
# ssd_pascal.py
batch_size = 2  # 32
accum_batch_size = 32
iter_size = accum_batch_size // batch_size
I don't understand the meaning of accum_batch_size here, or why it is set equal to 32. If I t…
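For what it's worth, accum_batch_size in the SSD script is the *effective* batch size: gradients from iter_size = 16 mini-batches of 2 are accumulated before a single parameter update, so the step behaves like one update on a batch of 32 (this is Caffe's iter_size mechanism). A toy NumPy sketch of the idea, where `toy_grad` is a made-up stand-in for a real backward pass:

```python
import numpy as np

batch_size = 2
accum_batch_size = 32
iter_size = accum_batch_size // batch_size  # 16 mini-batches per update

w = np.zeros(3)   # toy parameters
lr = 0.1
grad_accum = np.zeros_like(w)

def toy_grad(w, batch):
    # Stand-in for a real backward pass over one mini-batch.
    return batch.mean(axis=0) + w

rng = np.random.default_rng(0)
for _ in range(iter_size):
    batch = rng.normal(size=(batch_size, 3))
    grad_accum += toy_grad(w, batch)

# One optimizer step using the gradient averaged over all mini-batches:
w -= lr * grad_accum / iter_size
```

Accumulating like this trades GPU memory for wall-clock time: only batch_size samples are resident at once, but the update statistics match the larger batch.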
-
Hello,
Outside of the training function, I set:
```python
# define the hyperparameters for running the train function.
optimizer_ch2 = AdamW(model_ch2.parameters(), lr=lr, correct_bias=True)…
```
-
Hi! I'm a student learning CS285 online. Thank you for your great and generous work!
When doing homework1 and running the same code on two different machines, one Linux and one Windows, I got t…
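Not the official CS285 answer, but results differing across operating systems often come down to RNG seeding and platform-dependent ops. A minimal seeding sketch (`set_global_seed` is a hypothetical helper, and real training code would also need to seed the deep-learning framework's own RNGs):

```python
import os
import random
import numpy as np

def set_global_seed(seed: int = 0):
    # Seed the RNGs most training code touches. Frameworks such as
    # PyTorch or TensorFlow keep separate RNG state that must be
    # seeded through their own APIs as well.
    random.seed(seed)
    np.random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)

set_global_seed(0)
a = np.random.rand(3)
set_global_seed(0)
b = np.random.rand(3)  # identical draws after re-seeding
```

Even with identical seeds, some GPU kernels are nondeterministic, so small cross-machine differences can remain.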
-
### Feature details
Would the dev team be interested in implementing [Quantum gradient descent](https://arxiv.org/pdf/1612.01789.pdf) optimization for the TensorFlow backend?
### Implementation
An impl…
-
Why does adjusting batch_size lead to such a large improvement in mAP?
-
I have a dataset with around 400K observations. I wanted to perform batch correction using sc.pp.combat, but I'm getting out-of-memory errors after it runs for a couple of hours on a machine with > 2 TB of memory.…
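To illustrate one mitigation pattern: the sketch below is *not* ComBat (it omits the empirical-Bayes shrinkage of per-batch parameters entirely); it only shows plain per-batch standardization applied chunk by chunk, so peak memory stays near one chunk of rows rather than the full 400K-row matrix. The function name and chunk size are my own:

```python
import numpy as np

def standardize_per_batch(X, batches, chunk=10_000):
    # Simplified per-batch standardization: center and scale each
    # batch's rows, writing results back in row chunks so only a
    # slice of the matrix is touched at a time.
    X = X.astype(np.float32, copy=True)
    for b in np.unique(batches):
        idx = np.where(batches == b)[0]
        mu = X[idx].mean(axis=0)
        sd = X[idx].std(axis=0) + 1e-8
        for start in range(0, len(idx), chunk):
            rows = idx[start:start + chunk]
            X[rows] = (X[rows] - mu) / sd
    return X
```

For the real sc.pp.combat call, working in float32 and subsetting to highly variable genes first also cuts the memory footprint substantially.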
-
Hi
Thank you for presenting your research. I have a question regarding the embedding_transform in inversion.py. As I understand it, this function corresponds to the MLP model described in your…
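For concreteness, here is a generic two-layer MLP of the kind a name like embedding_transform usually denotes; every class name, dimension, and initialization choice here is a placeholder assumption, not the paper's actual architecture:

```python
import numpy as np

class EmbeddingTransform:
    # Hypothetical sketch: a two-layer perceptron mapping one
    # embedding space to another. All sizes are assumptions.
    def __init__(self, d_in=768, d_hidden=512, d_out=768, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.02, (d_in, d_hidden))
        self.b1 = np.zeros(d_hidden)
        self.W2 = rng.normal(0, 0.02, (d_hidden, d_out))
        self.b2 = np.zeros(d_out)

    def __call__(self, x):
        h = np.maximum(x @ self.W1 + self.b1, 0.0)  # ReLU hidden layer
        return h @ self.W2 + self.b2

mlp = EmbeddingTransform()
y = mlp(np.zeros((2, 768)))  # maps (2, 768) embeddings to (2, 768)
```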
-
## Global Parameters @janketj
- [ ] stop/start training
- [ ] send updates on the progress after every batch/epoch
- [ ] send updates on accuracy/loss every x seconds/milliseconds
- [ ] learnin…
-
Thanks for setting this up. I have a question about "bigdata.py". It appears to me that the code does not use all of the data from the "big data" population and only samples 8 points at a time. That is no different t…
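To make the sampling question concrete, here is a hedged sketch of the pattern being asked about: drawing 8 points per iteration from a large population. Over many iterations the draws collectively visit a growing subset of the data, which is the usual stochastic-approximation setup (names and sizes below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(size=(100_000,))

seen = set()
for _ in range(200):
    # Each iteration touches only 8 of the 100,000 points...
    idx = rng.choice(len(population), size=8, replace=False)
    seen.update(idx.tolist())
# ...but across 200 iterations up to 1600 distinct points are visited.
```

Whether this is statistically equivalent to using the full population depends on what is done with the samples, which is presumably the heart of the question.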