-
Hi LP!
I stumbled on your repo here since we're building our own alternative to backprop at aolabs.ai, using weightless neural networks.
Unless I'm missing something, MNIST has only 70k samples …
-
Thanks for sharing the code. It seems you did not change `num_class` and `num_point` in `train.py`: `num_class` is 10 for the MNIST dataset, and I believe the default `num_point` should be 2048.
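For illustration, assuming `train.py` exposes these as argparse options (the flag names here just mirror the comment and are hypothetical), the fix would look something like:

```python
import argparse

parser = argparse.ArgumentParser()
# MNIST has 10 classes, so num_class must be 10
parser.add_argument('--num_class', type=int, default=10)
# per the comment, the default point count should be 2048
parser.add_argument('--num_point', type=int, default=2048)

args = parser.parse_args([])  # empty list: use the defaults
```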
-
In order to reproduce the results, could you please provide the MNIST dataset split and files?
-
I am curious how the classification setting works. You mention in your paper that you use the cross-entropy loss.
Do you use a softmax as the final layer? How do you propagate the variance through the…
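For context, the standard setup this question assumes is a softmax over the final-layer logits followed by cross-entropy against the true label. A minimal NumPy sketch (not the paper's actual implementation):

```python
import numpy as np

def softmax(logits):
    # subtract the max logit for numerical stability
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, label):
    # negative log-probability of the true class under the softmax
    probs = softmax(logits)
    return -np.log(probs[label])

logits = np.array([2.0, 1.0, 0.1])  # example final-layer outputs
loss = cross_entropy(logits, 0)     # loss for true class 0
```

The open question in the comment is orthogonal to this: how a predictive *variance* (not just a point estimate of the logits) is pushed through this nonlinearity.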
-
Hey Charles, I am working with Prof. Hao now. Hardly any object has more than 100 points with intensity > 0.5, so there are just a lot of augmented points. I am unable to reproduce the reported results in the …
-
Sorry, but you are using the original MNIST, not Fashion-MNIST:
```python
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data1/", one_hot=True, so…
-
**Describe the bug**
In the Experimental aggregator-based workflow, private attributes of the aggregator object are passed to the collaborator step. This behavior does not occur in `LocalRuntime`.
…
-
### What happened?
When I run the following scripts:
```python
import kubeflow.katib as katib

def train_mnist_model(parameters):
    import tensorflow as tf
    import kubeflow.katib as katib
    …
-
I've been experimenting with Parametric UMAP using the example notebooks. However, I'm encountering an issue where the loss values I'm getting are significantly higher than those in the notebooks. This has …