Closed: howard84527 closed this issue 5 years ago
Sorry, got half way through resolving these issues on dev and made changes to dependencies on master and took an extended weekend right as this opened. I've just finished the fixes and merged dev to master. Afraid you'll have to re-pull dependencies too (dids, util3d).
A couple of issues for updating that I'll leave you to resolve:
- `model/mobilenet.py`
- `../clear_results.py MODEL_ID`
Feel free to post back if that doesn't resolve things.
Hi, I appreciate your time and effort. I found a slight error when I ran `clear_results.py`:

```
if __name__ == '__main__':
^
IndentationError: unindent does not match any outer indentation level
```

I re-aligned the code starting from `if __name__ == '__main__':` in `config.py`, which solved the problem.
Now do I need to actually remove the b_plane model and run the training again?
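For reference, the alignment Python expects - a minimal sketch (mine, not the repo's actual `config.py`): the main guard must sit at the same indentation level as the other top-level statements, with no tab/space mixing.

```python
def main():
    # function body indented consistently (e.g. 4 spaces throughout)
    return 'ok'


# the main guard sits at column 0, matching the other top-level statements;
# mixing tabs and spaces here is what triggers
# "IndentationError: unindent does not match any outer indentation level"
if __name__ == '__main__':
    print(main())
```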
You shouldn't need to retrain if you add "use_bn_bugged_version": true
to the bottom of the params file, though if results look like absolute rubbish retraining would probably be the first step I'd take to debugging. The keras update introduced many breaking changes, so it's not impossible.
Again, if you have trouble importing the hacked mobilenet file you may want to revert just that file to the previous commit. I haven't actually tested that, but there shouldn't be any issues. It's almost exactly the same as the `tf.keras.applications.mobilenet` script, just without the error checks, so it allows non-square input sizes.
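Coming back to the `use_bn_bugged_version` flag: the params file would then end something like the sketch below (the other key is a placeholder - the real keys depend on your model config).

```json
{
  "existing_param": "your unchanged settings above",
  "use_bn_bugged_version": true
}
```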
Sorry to bother you all the time. I retrained and got the loss around 9-11, but I still have the same evaluation error as before, so I want to try some other things.
Here are a few questions I want to ask you.
1. Where can I visualize the deformed airplanes? Are they stored in the `b_plane.hdf5` file in the `inference/_inferences/predictions` folder?
2. How do I render my own obj data? I can successfully run `render_cat.py my_data_id`, but the generated `_image` folder is empty. I also tried importing my obj file into MeshLab, re-exporting it as a new obj file, and then running `render_cat.py my_data_id -f`, but I got the output below:

```
Rendering 0 images for cat 12345678
```

Thank you very much.
All good, it's nice having someone find bugs for me :).
`scripts/vis_predictions.py` lets you visualize the predictions of your model (target and deformed template) lazily - i.e. it makes the predictions and visualizes the results, rather than saving them anywhere. If you want to generate/save the meshes, I've just added a `scripts/save_inferred_meshes.py`
script (and fixed a bug with the non-lazy dataset). Note the generated hdf5 file will generally be very large - ~3gb for evaluation planes for a single view with edge_length_threshold = 0.02
(default). The hdf5 file will be located at inference/_inferences/meshes/0.02/MODEL_ID.hdf5
or w/ever the edge length threshold you use. You may have to export data to .obj
if you want to visualize it with an external tool. Something like:
```python
from template_ffd.inference.meshes import get_inferred_mesh_dataset
from util3d.mesh.obj_io import write_obj
import numpy as np
import os

model_id = 'e_plane_fixed'
edge_length_threshold = 0.02
out_dir = '/tmp/%s' % model_id
if not os.path.isdir(out_dir):
    os.makedirs(out_dir)

with get_inferred_mesh_dataset(
        model_id, edge_length_threshold, lazy=True) as ds:
    for key in ds:
        cat_id, example_id, view_id = key
        example = ds[key]
        vertices = np.array(example['vertices'])
        faces = np.array(example['faces'])
        path = os.path.join(out_dir, '%s.obj' % example_id)
        write_obj(path, vertices, faces)
```
Note this will be even larger than the hdf5 file if you generate a .obj for each example.
2. My shapenet blender renderings are ripped from [here](https://github.com/panmari/stanford-shapenet-renderer) - I've just put a wrapper around it to extract obj files, render them, then delete the objs, looping over all examples. If you only want to do it for a few, check out the original repo and use the command line. If you want to do the same thing for a large number, you'll probably have to write your own wrapper. If you've saved files in a similar format/structure to shapenet then you can probably base it heavily on mine. See `render_obj` in `shapenet/core/blender_renderings/scripts/render_cat.py`.
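If you do end up writing your own wrapper, the per-obj call is roughly a subprocess invocation of blender with the renderer script. A sketch with assumed script/flag names - check the stanford-shapenet-renderer README and `render_obj` for the real interface:

```python
import subprocess


def render_cmd(obj_path, out_dir, blender='blender',
               script='render_blender.py'):
    # build a blender command line in the style of
    # stanford-shapenet-renderer; the script name and --output_folder
    # flag are assumptions, not this repo's actual interface
    return [blender, '--background', '--python', script, '--',
            '--output_folder', out_dir, obj_path]


def render_all(obj_paths, out_dir, run=subprocess.check_call):
    # loop over extracted objs, rendering each; `run` is injectable so
    # the loop can be exercised without blender installed
    for path in obj_paths:
        run(render_cmd(path, out_dir))
```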
Regarding the evaluation error: have you cleaned all the intermediate files? `./clear_results.py MODEL_ID -a`?
Yeah, I ran `./clear_results.py b_plane -a` and got the output below:
```
Removing file /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/predictions/b_plane.hdf5
Removing file /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/cloud/presampled/1024/b_plane.hdf5
Removing file /home/howard/TF_14/FFD-template/template_ffd/eval/_eval/chamfer/presampled/1024/b_plane.json
Removing subdir /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/voxels/base/d032ttt/b_plane/filled
Removing subdir /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/voxels/base/d032ttt/b_plane/unfilled
Removing subdir /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/voxels/0.1/d032ttt/b_plane/filled
Removing subdir /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/voxels/0.1/d032ttt/b_plane/unfilled
Removing subdir /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/voxels/0.05/d032ttt/b_plane/filled
Removing subdir /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/voxels/0.05/d032ttt/b_plane/unfilled
Removing file /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/meshes/0.02/b_plane.hdf5
Removing subdir /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/voxels/0.02/d032ttt/b_plane/filled
Removing file /home/howard/TF_14/FFD-template/template_ffd/eval/_eval/iou/0.02/d032ttt/filled/b_plane.json
Removing subdir /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/voxels/0.02/d032ttt/b_plane/unfilled
Removing subdir /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/voxels/0.01/d032ttt/b_plane/filled
Removing subdir /home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/voxels/0.01/d032ttt/b_plane/unfilled
```
My tensorflow version is 1.4.1, so I use the old code. I also added `"use_bn_bugged_version": true` to the bottom of the `b_plane.json` file in the `model/params` folder.
Interesting. It sounds like there's an error generating the data from that dataset somewhere along the line, but rather than throwing, it's just creating an empty dataset, which propagates through.
After trying to run the chamfer evaluation, which of the following are occupied? Meaning, there's a file there and it's more than 800 bytes:

```
/home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/predictions/b_plane.hdf5
/home/howard/TF_14/FFD-template/template_ffd/inference/_inferences/cloud/presampled/1024/b_plane.hdf5
/home/howard/TF_14/FFD-template/template_ffd/eval/_eval/chamfer/presampled/1024/b_plane.json
```
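A quick way to check programmatically - a throwaway helper (`occupied` is my name for it, not part of the repo; adjust the paths to your tree):

```python
import os


def occupied(path, min_bytes=800):
    # a file that exists but is under ~800 bytes is usually just an empty
    # hdf5/json shell left behind by a failed generation step
    return os.path.isfile(path) and os.path.getsize(path) > min_bytes


paths = [
    'inference/_inferences/predictions/b_plane.hdf5',
    'inference/_inferences/cloud/presampled/1024/b_plane.hdf5',
    'eval/_eval/chamfer/presampled/1024/b_plane.json',
]
for p in paths:
    print(p, occupied(p))
```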
I can successfully run `python infer.py b_plane`, but I get errors when I run `scripts/iou.py`, `scripts/chamfer.py` and `scripts/ffd_emd.py`.
The `b_plane.hdf5` in the `inference/_inferences/predictions` folder is 19523 KB.
The `b_plane.hdf5` in the `inference/cloud/presampled/1024` folder is 19609 KB.
The `b_plane.json` in the `eval/_eval/chamfer/presampled/1024` folder is 1 KB.
Interesting. Might be to do with your ground truth point cloud... though I would have thought that would have screwed with your training.
What's the result of running the following:
```python
#!/usr/bin/python
from template_ffd.eval.chamfer import get_chamfer_manager
from shapenet.core.point_clouds import get_point_cloud_dataset

cat_id = '02691156'
n_samples = 16384
model_id = 'b_plane'

with get_point_cloud_dataset(cat_id, n_samples) as ds:
    print('gt len: %d' % len(ds))

manager = get_chamfer_manager(model_id)
with manager.get_lazy_dataset() as ds:
    print('lazy len: %d' % len(ds))
```
```
gt len: 4045
lazy len: 0
```
Getting close. How about:
```python
from template_ffd.eval.normalize import get_normalization_params_dataset

cat_id = '02691156'
with get_normalization_params_dataset(cat_id) as ds:
    print(len(ds))
```
803
... well there goes that hypothesis.
```python
from template_ffd.eval.chamfer import get_chamfer_manager

cat_id = '02691156'
model_id = 'b_plane'
manager = get_chamfer_manager(model_id)
with manager.get_inferred_cloud_dataset() as ds:
    print('inferred len: %d' % len(ds))
```
uuuuurrrggh. Have you repulled shapenet repo recently?
inferred len: 803
Sorry, I don't understand what you mean.
Run `git pull` from the directory you cloned the shapenet repo into (my repository, not the dataset itself).
I ran it and it shows `Already up-to-date.`
... ... ... I'm really confused now. Can you repull both this and shapenet (I just pushed to each - though I'm pretty sure the changes won't make any difference), then post the output of the following?
```python
#!/usr/bin/python
from template_ffd.eval.chamfer import get_chamfer_manager
from shapenet.core.point_clouds import get_point_cloud_dataset
from template_ffd.eval.normalize import get_normalization_params_dataset
from template_ffd.eval.point_cloud import get_lazy_evaluation_dataset
from template_ffd.data.ids import get_example_ids
from dids.core import Dataset

cat_id = '02691156'
n_samples = 16384
model_id = 'b_plane'

example_ids = get_example_ids(cat_id, 'eval')
print('example id len: %d' % len(example_ids))


def report(id_, dataset):
    print(id_, tuple(dataset.keys())[0], len(dataset))


gt_cloud_ds = get_point_cloud_dataset(cat_id, n_samples)
manager = get_chamfer_manager(model_id)
inf_cloud_ds = manager.get_inferred_cloud_dataset()
lazy_chamfer_ds = manager.get_lazy_dataset()
lazy_eval_ds = get_lazy_evaluation_dataset(
    inf_cloud_ds, cat_id, n_samples, lambda a, b: None)
normalization_ds = get_normalization_params_dataset(cat_id)

with inf_cloud_ds:
    keys = tuple(inf_cloud_ds.keys())
    report('inf_cloud_ds', inf_cloud_ds)
with gt_cloud_ds:
    report('gt cloud', gt_cloud_ds)
with normalization_ds:
    report('normalization', normalization_ds)

normalization_ds = normalization_ds.map_keys(
    lambda key: key[:2])
gt_cloud_ds = gt_cloud_ds.map_keys(lambda key: key[:2])
zipped = Dataset.zip(
    inf_cloud_ds, gt_cloud_ds, normalization_ds).subset(
        keys, check_present=False)

with lazy_eval_ds:
    report('lazy eval ds', lazy_eval_ds)
with zipped:
    report('zipped', zipped)
with lazy_chamfer_ds:
    report('lazy chamfer', lazy_chamfer_ds)
```
I ran `git pull` and it shows:

```
remote: Counting objects: 5, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 5 (delta 3), reused 5 (delta 3), pack-reused 0
Unpacking objects: 100% (5/5), done.
From https://github.com/jackd/shapenet
   f5afe22..b16aa15  master     -> origin/master
Updating f5afe22..b16aa15
Fast-forward
 core/point_clouds/dataset.py | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)
```
The result is following:
```
example id len: 803
('inf_cloud_ds', u'10155655850468db78d106ce0a280f87', 803)
('gt cloud', u'10155655850468db78d106ce0a280f87', 4045)
('normalization', u'92306925325c9e8a3f3cc56bc9bfcde3', 803)
('lazy eval ds', u'92306925325c9e8a3f3cc56bc9bfcde3', 803)
('zipped', u'92306925325c9e8a3f3cc56bc9bfcde3', 803)
('lazy chamfer', u'92306925325c9e8a3f3cc56bc9bfcde3', 803)
```
... do you still get the evaluation error when you run `chamfer.py`?
I've no idea why your keys are a single string - early versions of the code did that, but I updated the code to allow a single model to infer multiple categories/views. Make sure you're on the latest version of the master branch, but if you can't make that middle value a `(cat_id, example_id, view_str)` tuple then a dirty hack fix may be just to remove the filter in the final averaging function that checks the `cat_id`/`view_str`.
I successfully got the chamfer distance: 0.03496484720364081. Does the value mean comparing the point cloud of the input shape with the point cloud of the deformed shape? Thank you very much.
Sounds about right - though you may get a better score with more training time. Aye, that's the inferred model (i.e. deformed template) vs ground truth (from which the rendering is made).
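For intuition, the metric itself is roughly the following - a minimal numpy sketch of symmetric chamfer distance, not the repo's implementation (the exact convention for squared vs. unsquared distances and sum vs. mean may differ):

```python
import numpy as np


def chamfer(a, b):
    # symmetric chamfer distance between point clouds a (n, 3) and b (m, 3):
    # for each point, squared distance to its nearest neighbour in the other
    # cloud, averaged over each cloud and summed over both directions
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.min(d2, axis=1).mean() + np.min(d2, axis=0).mean()
```

Identical clouds give 0, and the value grows as the inferred (deformed template) cloud drifts from the ground-truth cloud.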
I appreciate your help. Now I'm running `scripts/ffd_emd.py` and I only get an error from `scripts/iou.py`. The error is below:

```
File "/home/howard/TF_14/FFD-template/template_ffd/data/voxels.py", line 19, in map_fn
    voxels.axis_order)
AttributeError: 'RleVoxels' object has no attribute 'axis_order'
```
Thank you.
You must be on an old branch or something, mate. That's not what the up-to-date `voxels.py` line 19 has. I rewrote the voxel code a while back and the current master branch has no references to `axis_order` other than some unrelated visualisation stuff.
OK, I will update `voxels.py`. I ran `save_inferred_meshes.py` but I got the error below:

```
File "/home/howard/TF_14/FFD-template/dids/core.py", line 6, in _get_progress
    from progress.bar import IncrementalBar
ImportError: No module named progress.bar
```
I ran `pip install progressbar` but that didn't solve the problem.
Thank you.
`pip install progress`
Sorry to ask such a stupid question. I got new errors when I ran `scripts/save_inferred_meshes.py` and `scripts/iou.py`.
```
Saving mesh data
model_id: b_plane
edge_length_threshold: 0.02
 |                    | 1/803
Traceback (most recent call last):
  File "save_inferred_meshes.py", line 21, in <module>
```

```
Traceback (most recent call last):
  File "iou.py", line 29, in <module>
```
What's the latest commit id in your `git log`?
```
commit 6ab1b338d6a754b3d426f847046b81d9dc35b968
Author: Dominic Jack <thedomjack@gmail.com>
Date:   Thu May 17 13:37:32 2018 +1000

    create params dir if not already there

commit ed6ed5a69ce6917d5d145ee51ef8a88717213a02
Author: Dominic Jack <thedomjack@gmail.com>
Date:   Thu May 17 12:09:38 2018 +1000

    autosave fix

commit 4fc02d59f2c5537a11df0ce1cf828f8a34d72a9b
Author: Dominic Jack <thedomjack@gmail.com>
Date:   Thu May 17 08:45:56 2018 +1000

    Fixed evaluation generation bugs

commit fa57769be07f0c5c8776dfb154b3ce52f25ccb98
Author: Dominic Jack <thedomjack@gmail.com>
Date:   Thu May 17 08:45:36 2018 +1000

    updated paper link

commit c176094a6c99a225a16a7c7c0ed418df0e213116
Author: Dominic Jack <thedomjack@gmail.com>
Date:   Wed May 16 17:30:04 2018 +1000

    added explicit numpy type

commit 6252bb6234c271aab054af9633caf00352e2968d
Author: Dominic Jack <thedomjack@gmail.com>
Date:   Tue Apr 3 07:52:55 2018 +1000

    Added arxiv paper reference

commit 2ace76cf2ee8d144110eeda2a289e82ae52ce18f
Author: Dominic Jack <thedomjack@gmail.com>
Date:   Wed Mar 28 08:48:42 2018 +1000

    Initial commit
```
Well that explains a lot. Pull the latest changes, because you're missing a fuck ton.
```
commit 6ab1b338d6a754b3d426f847046b81d9dc35b968
Author: Dominic Jack <thedomjack@gmail.com>
Date:   Thu May 17 13:37:32 2018 +1000

    create params dir if not already there
```

Is this what you mean? Thank you.
It tells me you haven't received any of the changes I've made since May. I've made a lot, and I'm not going to assist you in debugging bugs I've already fixed.
I really appreciate your help.
Now I have pulled to the latest commit:

```
commit cca147f0695ac222a14bf773095e8719ad5f5232
Author: Dominic Jack <thedomjack@gmail.com>
Date:   Fri Jul 27 23:36:14 2018 +1000
```
I use the mobilenet.py from the old commit, but I can only run `scripts/infer.py`. Here are the detailed descriptions of the errors.

`python iou.py`:

```
File "/home/howard/TF_14/FFD-template/dids/nest.py", line 24, in _nested_keys
    for key in group.keys():
AttributeError: 'Dataset' object has no attribute 'keys'
```

`python chamfer.py`:

```
File "/home/howard/TF_14/FFD-template/dids/nest.py", line 37, in _nested_values
    for value in group.values():
AttributeError: 'float' object has no attribute 'values'
```

`python ffd_emd.py`:

```
    for subval in _nested_values(value, depth-1):
  File "/home/howard/TF_14/FFD-template/dids/nest.py", line 37, in _nested_values
    for value in group.values():
AttributeError: 'float' object has no attribute 'values'
```
I also have some questions about `scripts/vis_predictions.py`. When I run it, it shows three pictures: input templates, deformed templates and ground truths. Do the input templates come from the training data (4045) or the evaluation data (803)? I think the evaluation data shouldn't have ground truths, because we apply the trained weight parameters to deform the template. How do we know whether the deformed result is right if we apply the weights to the evaluation data?
Thank you very much.
Have you cleaned/regenerated the inference results? And are you sure you've pulled the most recent versions of the dids, util3d and shapenet repos?
Oh shit, I forgot.... I successfully got the value 0.666606984630216 for IoU. I also got errors when I ran `chamfer.py` and `ffd_emd.py`.
"chamfer.py":
return KeyError('key %s not a valid key' % key) TypeError: not all arguments converted during string formatting
"ffd_emd.py":
File "/home/howard/TF_14/FFD-template/dids/nest.py", line 37, in _nested_values for value in group.values(): AttributeError: 'float' object has no attribute 'values'
The most recent commits are below:

dids:

```
commit b2b0eeee040673c613eb9bb315d48082b625ce45
Author: Dominic Jack <thedomjack@gmail.com>
Date:   Mon Jul 23 20:42:56 2018 +1000

    nested fixes
```

util3d:

```
commit 61fba00df4be97a6539f6ddee59000902116d932
Author: Dominic Jack <thedomjack@gmail.com>
Date:   Mon Jul 23 20:43:30 2018 +1000

    voxel dataset can now handle nested directories
```

shapenet:

```
commit b16aa157f7363d7b030e334ea32ee4bd3a90cb5a
Author: Dominic Jack <thedomjack@gmail.com>
Date:   Fri Jul 27 23:27:06 2018 +1000

    changed single cat_id to produce normal ds, rather than bikey
```

Thank you
Alright, that one's on me. Re-pull this repo (and dids if you want more meaningful error messages very occasionally).
I got a different error when I ran `chamfer.py`:
```
Creating Chamfer data
model_id: b_plane
n_samples: 1024
 |                    | 1/803
Traceback (most recent call last):
  File "chamfer.py", line 40, in <module>
```
I also got the same error as before when I ran `ffd_emd.py`. I have repulled the repos (dids, shapenet, util3d) to the latest.
Thank you very much.
Did you repull this repo as well?
Sorry, do you mean re-pulling only `chamfer.py` and `ffd_emd.py`? Thank you
No, I mean pulling everything. I don't work on files in isolation, and if you want my help you need to take the changes as I make them.
Hi, I can successfully run all the evaluations, but I get the value 666.6 when I run `ffd_emd.py`. I will run it again.
Thank you for your help.
Ah, yeah, sorry, I changed emd implementations - the first auto-scaled by the number of points, the second didn't. It's just too big by a factor of 1024 (the number of points) if I remember correctly... I'll look into it.
Should be fixed in the latest branch, but I know it takes forever to generate those numbers. The papers I compare against use an approximate algorithm, but they don't release the code... so I'm using an exact version. Anyway, if you haven't cleaned the results, I'm almost certain they'll just be too big by a factor of 1024, or possibly 2048. Let me know if the result is still wildly different from the numbers reported in the paper (assuming your chamfer distance is roughly the same)
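To illustrate the scaling difference: for equal-sized 1D point sets the exact EMD reduces to matching sorted samples, and the "auto-scaled" version is just that total divided by the number of points. A toy sketch of mine, not either of the implementations used here:

```python
import numpy as np


def emd_1d(a, b):
    # exact earth mover's distance between equal-size 1d point sets:
    # the optimal matching pairs sorted samples, so just sum the
    # differences of the sorted arrays
    return float(np.abs(np.sort(a) - np.sort(b)).sum())


a = np.zeros(1024)
b = np.ones(1024)
total = emd_1d(a, b)        # unscaled: grows with the number of points
per_point = total / len(a)  # auto-scaled: invariant to point count
```

With 1024 points one unit apart, `total` is 1024.0 while `per_point` is 1.0 - the factor-of-1024 discrepancy described above.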
Should I git pull to the latest commit and run `ffd_emd.py` again?
I haven't changed anything to do with it in the last few days. The latest commit just (hopefully) made the mobilenet hack work with earlier versions of tensorflow (version >= 1.4 - the old version of the code required 1.8+).
The result value is 322.7233347554819
I've changed the emd implementation. I switched to a little-used library close to paper submission thinking it might be faster - it wasn't noticeably, though I never switched back. Now back to using pyemd.
For the emd implementation, optas also has a version here which you can have a look at if you didn't before. Maybe it will help you.
Hi, I have trained the b_plane model but I got some errors when I ran `python iou.py` and `python chamfer.py`. Here is the detailed description of the errors.

Running `python iou.py`:
```
Creating filled voxels
 |                    | 1/803
Traceback (most recent call last):
  File "iou.py", line 32, in <module>
    args.overwrite)
  File "iou.py", line 13, in create_and_report
    iou.report_iou_average(**kwargs)
  File "/home/howard/TF_14/FFD-template/template_ffd/eval/iou.py", line 129, in report_iou_average
    filled=filled) as ds:
  File "/home/howard/TF_14/FFD-template/template_ffd/eval/iou.py", line 123, in get_iou_dataset
    manager.save_all()
  File "/home/howard/TF_14/FFD-template/dids/auto_save.py", line 77, in save_all
    with self.get_auto_saving_dataset('a') as ds:
  File "/home/howard/TF_14/FFD-template/dids/auto_save.py", line 69, in get_auto_saving_dataset
    self.get_lazy_dataset(),
  File "/home/howard/TF_14/FFD-template/template_ffd/eval/iou.py", line 92, in get_lazy_dataset
    filled=self._filled, example_ids=example_ids)
  File "/home/howard/TF_14/FFD-template/template_ffd/inference/voxels.py", line 31, in get_voxel_dataset
    create_voxel_data(example_ids=example_ids, overwrite=False, **kwargs)
  File "/home/howard/TF_14/FFD-template/template_ffd/inference/voxels.py", line 104, in create_voxel_data
    _create_filled_voxel_data(**kwargs)
  File "/home/howard/TF_14/FFD-template/template_ffd/inference/voxels.py", line 90, in _create_filled_voxel_data
    unfilled, dst, message=message, overwrite=overwrite)
  File "/home/howard/TF_14/FFD-template/template_ffd/data/voxels.py", line 23, in create_filled_data
    dst.save_dataset(src, overwrite=overwrite, message=message)
  File "/home/howard/TF_14/FFD-template/dids/core.py", line 204, in save_dataset
    value = dataset[key]
  File "/home/howard/TF_14/FFD-template/dids/core.py", line 477, in __getitem__
    return self._map_fn(self._base[key])
  File "/home/howard/TF_14/FFD-template/template_ffd/data/voxels.py", line 19, in map_fn
    voxels.axis_order)
AttributeError: 'RleVoxels' object has no attribute 'axis_order'
```
Running " python chamfer.py ":
RuntimeWarning: Mean of empty slice. out=out, **kwargs) /home/howard/TF_14/local/lib/python2.7/site-packages/numpy/core/_methods.py:80: RuntimeWarning: invalid value encountered in double_scalars ret = ret.dtype.type(ret / rcount)