vinayakumarr opened this issue 7 years ago
Because the files are larger than 50MB, they are stored with git lfs
You need to install git lfs https://git-lfs.github.com/
then run
git lfs pull
to download the files
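On a fresh clone you may also need to run
git lfs install
once first (this sets up the git-lfs hooks); after that,
git lfs pull
fetches and checks out the LFS-tracked .h5 files for the current branch.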
Now it is giving a different error when I tried to run
sudo python train.py data/processed.h5 model.h5 --epochs 20
Using Theano backend.
Traceback (most recent call last):
File "train.py", line 65, in
This is due to a change in the Keras API: the parameter std has been renamed to stddev.
Change the code and submit a pull request :)
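For anyone making that change, here is a minimal sketch of the renamed keyword, assuming a VAE-style sampling function that calls K.random_normal (the names latent_rep_size and epsilon_std and the values below are illustrative, not taken from the repo):

```python
from keras import backend as K

latent_rep_size = 292   # illustrative latent dimensionality
epsilon_std = 0.01      # illustrative noise scale

def sampling(args):
    # Reparameterization trick: z = mean + exp(log_var / 2) * epsilon
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_rep_size),
                              mean=0.,
                              stddev=epsilon_std)  # older Keras spelled this keyword 'std'
    return z_mean + K.exp(z_log_var / 2) * epsilon
```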
Yes, I have corrected it. I think you are using the data and the labels as the same thing in both train and test (in train.py, line no. 54). Why? Also, you are giving the testing data as validation data? Is there a separate program to calculate the accuracy on the test data set? I want to know whether the code does classification or prediction.
In my opinion it is a kind of prediction, am I right?
I'm a lurker in this repo - I don't use the train/test code
You're right, the latter should be "data_test". In general, "train_gen.py" should be used instead; it should be less demanding on your machine.
I wouldn't call an autoencoder or a VAE classification or prediction. Instead, I would call it representation learning, a la https://hips.seas.harvard.edu/blog/2013/02/04/predictive-learning-vs-representation-learning/
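To make the data/label question above concrete: an autoencoder is trained to reconstruct its own input, so the target is the input itself and the held-out set can be passed as validation data. A minimal, self-contained sketch with toy arrays (the shapes and dense layers here are illustrative, not the repo's architecture):

```python
import numpy as np
from keras.models import Model
from keras.layers import Input, Dense

# Toy data standing in for the processed tensors (shapes are illustrative).
rng = np.random.RandomState(0)
data_train = rng.rand(1000, 120).astype("float32")
data_test = rng.rand(200, 120).astype("float32")

# A tiny dense autoencoder, just to show the shape of the training call.
inputs = Input(shape=(120,))
encoded = Dense(32, activation="relu")(inputs)
decoded = Dense(120, activation="sigmoid")(encoded)
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# The target is the input itself, and the test set doubles as validation data,
# which is presumably why train.py passes the same array as both x and y.
autoencoder.fit(data_train, data_train,
                epochs=2,
                batch_size=64,
                validation_data=(data_test, data_test))
```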
For the record, if you are using the latest version of TensorFlow with Keras, the API has changed std => stddev.
One way to resolve the exception is to check out / download / replace the data files.
Getting an error when I tried to run
python preprocess.py data/smiles_500k.h5 data/processed_500.h5
File "preprocess.py", line 85, in -
operator, is not supported, use the ~
operator or the logical_not function instead.
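That error comes from a NumPy change: negating a boolean array with the unary - operator was removed, so wherever preprocess.py does that, the mask needs to be inverted with ~ or np.logical_not instead. A minimal sketch of the substitution (the mask array here is illustrative):

```python
import numpy as np

mask = np.array([True, False, True])  # illustrative boolean mask

# Older NumPy allowed: inverted = -mask
# Recent NumPy raises: "The numpy boolean negative, the `-` operator, is not supported ..."

inverted = ~mask                 # array([False,  True, False])
inverted = np.logical_not(mask)  # equivalent
```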
When I tried to run the program by executing python preprocess.py data/smiles_50k.h5 data/processed.h5, it generates an error. The detailed error is attached in the image. How can I correct this?