Closed cloudlakecho closed 6 years ago
First of all, thanks for checking out my repo. I just uploaded a sample params.json. Could you give it a try and tell me what it says?
@micmelesse
Thanks for uploading params.json
I tried and I get this error:

```
...
recurrent_module
decoder
(?, 8, 8, 8, 128)
(?, 8, 8, 8, 128)
(?, 8, 8, 8, 128)
(?, 8, 8, 8, 128)
(?, 8, 8, 8, 128)
(?, 8, 8, 8, 128)
(?, 16, 16, 16, 128)
(?, 16, 16, 16, 128)
(?, 16, 16, 16, 128)
(?, 32, 32, 32, 64)
(?, 32, 32, 32, 64)
(?, 32, 32, 32, 64)
(?, 32, 32, 32, 32)
(?, 32, 32, 32, 32)
(?, 32, 32, 32, 32)
(?, 32, 32, 32, 2)
loss
misc
optimizer
metrics
setup
2018-06-18 17:41:38.925204: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
summaries
initialize
...done!
training loop
Traceback (most recent call last):
  File "/anaconda2/envs/tdreconnn/lib/python3.6/site-packages/numpy/lib/shape_base.py", line 463, in array_split
    Nsections = len(indices_or_sections) + 1
TypeError: object of type 'int' has no len()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "run.py", line 87, in <module>
    X_train, y_train, train_params["BATCH_SIZE"])
  File "/3D-reconstruction-with-Neural-Networks/lib/dataset.py", line 125, in shuffle_batchs
    data_batchs = np.array_split(data[perm], num_of_batches)
  File "/anaconda2/envs/tdreconnn/lib/python3.6/site-packages/numpy/lib/shape_base.py", line 469, in array_split
    raise ValueError('number sections must be larger than 0.')
ValueError: number sections must be larger than 0.
```
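The `ValueError` comes from NumPy itself: when `np.array_split` is given an integer number of sections rather than a list of indices, that integer must be positive. (The preceding `TypeError` is normal internal behavior: NumPy first tries `len()` on the argument, and falls back to the integer path.) A minimal reproduction:

```python
import numpy as np

data = np.arange(10)

# Splitting into a positive number of sections works,
# even when the sections come out unequal:
batches = np.array_split(data, 3)
assert len(batches) == 3

# Splitting into 0 sections raises the error from the traceback:
try:
    np.array_split(data, 0)
except ValueError as e:
    print(e)  # number sections must be larger than 0.
```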
I am suspicious of this line in run.py:

```python
# get preprocessed data
data, label = dataset.load_preprocessed_dataset()
```

When I check the data and label variables, they are empty.
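That would explain the traceback: if `data` is empty, the batch count computed in `shuffle_batchs` ends up as 0, and `np.array_split` refuses it. The repo's actual `shuffle_batchs` in `lib/dataset.py` is not shown here, so the following is only a sketch of the failure mode and an early guard, with the batch-count formula assumed:

```python
import numpy as np

def shuffle_batchs(data, label, batch_size):
    # Hypothetical sketch, not the repo's real implementation.
    if len(data) == 0:
        # Fail with a clear message instead of the opaque array_split error.
        raise ValueError("dataset is empty -- did preprocessing run?")
    perm = np.random.permutation(len(data))
    num_of_batches = max(1, len(data) // batch_size)  # assumed formula
    data_batchs = np.array_split(data[perm], num_of_batches)
    label_batchs = np.array_split(label[perm], num_of_batches)
    return data_batchs, label_batchs
```

With an empty array, `num_of_batches` would otherwise be 0, reproducing the exact `ValueError` above.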
I put the data in the data and data_preprocessed folders (groups of BINVOX files). Am I missing something in how I placed the data in the folders, or in the settings?
Try running sh scripts/preprocess_dataset.sh. It should check if the dataset exists and, if not, download and preprocess it. I updated the readme file. Let me know how it goes.
@micmelesse
Thanks for the instructions, but I don't think preprocess_dataset.sh can get through my network. I actually downloaded the dataset manually from these two links:

ShapeNet rendered images: http://cvgl.stanford.edu/data2/ShapeNetRendering.tgz
ShapeNet voxelized models: http://cvgl.stanford.edu/data2/ShapeNetVox32.tgz
If you indicate where I have to put the extracted files, I could do it manually. Should they be under

3D-reconstruction-with-Neural-Networks/data/ShapeNetVox32
3D-reconstruction-with-Neural-Networks/data/ShapeNetRendering

and then should I run the dataset.preprocess_dataset() function in dataset.py?
Yes, that seems about right. I am trying to confirm that from my end but let me know if that works for you.
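Assuming that layout is correct, the manual equivalent of the download script might look like the sketch below. The URLs come from the thread above; the extraction destination and the final Python invocation are assumptions, not the repo's documented commands:

```shell
# Download the two ShapeNet archives (URLs from the thread above)
wget http://cvgl.stanford.edu/data2/ShapeNetRendering.tgz
wget http://cvgl.stanford.edu/data2/ShapeNetVox32.tgz

# Extract them into the repo's data directory (assumed destination)
mkdir -p data
tar -xzf ShapeNetRendering.tgz -C data
tar -xzf ShapeNetVox32.tgz -C data

# Then run the preprocessing step (assumed invocation)
python -c "from lib import dataset; dataset.preprocess_dataset()"
```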
@micmelesse Thanks for the answer.

I followed these steps and it looks like I am able to run bash scripts/train.sh.

What is the way to initialize the tensors in the params.json file? I tried to train the network with the bash scripts/train.sh command, but I got an error during tensor initialization.

Error Message

For your information, this is my params.json file: