AutodeskAILab / Fusion360GalleryDataset

Data, tools, and documentation of the Fusion 360 Gallery Dataset

Method to create reconstruction json files #71

Closed hongfenglu closed 2 years ago

hongfenglu commented 3 years ago

I would like to create my own reconstruction dataset. I'm wondering if there is a tool to convert a design in .smt/.step/... format to a reconstruction json file. Thanks in advance!

karldd commented 3 years ago

Yes, this is a good question. To process the dataset we do the conversion from the .f3d Fusion 360 native file format. B-Rep files (.smt or .step) won't have sufficient information in them, i.e. no construction history. Could you elaborate a little more on the input data and I can try to offer a solution.

Currently we haven't added the .f3d > json code in the repo but a lot of the methods it uses to serialize the various parameters are available here.

hongfenglu commented 3 years ago

Thank you for the reply! I'm creating designs with an add-in and can also export them in .f3d format. Is there any chance the code will be provided?

karldd commented 3 years ago

Yes I think we can provide it. There is some work to do to untangle it from another codebase, but it would be worth doing I think. I'll leave this issue open and report back when we have some progress.

hongfenglu commented 3 years ago

Great! Thanks very much!

hongfenglu commented 3 years ago

Hi, may I ask if the code is ready? For a single design, I currently create graphs for each intermediate stage in the gym environment and manually match face ids (different faces share one id if they are contained in the same face in the final graph). However, there is a problem: faces that appear in both an intermediate design and the final design can have different points/trimming_mask representations. Also, different faces that are contained in one face of the final graph are treated as different nodes with different features (representations) in an intermediate graph. Is there any way I can create the sequence of graphs as in your reconstruction dataset? Thanks so much!

karldd commented 3 years ago

Unfortunately it is going to take some time to get this code ported across. However, I think the issue you are seeing is one of 'splitting and merging' of B-Rep faces, and having that code may not necessarily help with it.

Face Merging

Consider a design like this that is made in two steps: the first step extrudes the half circle profile and the second step extrudes the rectangle profile. [image]

With the timeline marker at the first step we can see that all the faces, highlighted in blue, belong to that first extrude. [image] This is what is registered in the json we provide in the dataset.

But when we move the timeline marker to the end and highlight the first extrude faces [image] and then the second extrude faces [image], we can see that the second extrude merges the face on the back. [image] This merged face now belongs to the second extrude. Solid modeling kernels do this intentionally to represent the model in the simplest way: rather than have a lot of fragmented faces, users would rather have a single face in this scenario.
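To make the bookkeeping concrete, here is a small sketch (plain Python, not the Fusion 360 API) of what merging does to per-extrude face ownership. The face and extrude names are hypothetical, chosen to mirror the half-circle example above:

```python
# Hypothetical face-ownership bookkeeping: track which extrude "owns" each
# B-Rep face, and reassign ownership when the kernel merges two coplanar
# faces into a single face.

def merge_faces(ownership, face_a, face_b, merged_face, new_owner):
    """Remove the two original faces and record the merged face
    under the extrude that triggered the merge."""
    ownership.pop(face_a)
    ownership.pop(face_b)
    ownership[merged_face] = new_owner
    return ownership

# After extrude 1: the half-circle solid has a back face "f_back_1"
ownership = {"f_back_1": "extrude1", "f_top": "extrude1"}

# Extrude 2 adds a box whose back face "f_back_2" is coplanar with
# "f_back_1"; the kernel merges them into one face "f_back_merged"
ownership["f_back_2"] = "extrude2"
ownership = merge_faces(ownership, "f_back_1", "f_back_2",
                        "f_back_merged", "extrude2")

print(ownership["f_back_merged"])  # extrude2: the merged face now
                                   # belongs to the second extrude
```

This is why the faces recorded against extrude 1 at step 1 no longer all exist at the end of the timeline.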

Face Splitting

Consider this design. [image]

The first step is a simple rectangle extrude. All faces belong to extrude 1 here (again highlighted in blue): [image]

The second step cuts a notch in the design. The new faces added are highlighted in blue. [image]

This looks sensible, but consider the top face: it has now been split into two faces. [image] This means that the faces created by extrude 1 are different at different points in the timeline. In this case there will be one additional face added after extrude 2.
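The splitting case can be sketched the same way (again plain Python with hypothetical face names, not the Fusion 360 API): one face id is replaced by two pieces that keep the original owner, so the face count attributed to extrude 1 grows after extrude 2 is applied:

```python
# Hypothetical face-ownership bookkeeping: when a later feature splits an
# existing face, one face id becomes two, both still owned by the original
# extrude.

def split_face(ownership, face, new_faces):
    """Replace a single face with the pieces it was split into,
    keeping the original owning extrude for each piece."""
    owner = ownership.pop(face)
    for piece in new_faces:
        ownership[piece] = owner
    return ownership

# After extrude 1: a box whose top face is "f_top"
ownership = {"f_top": "extrude1", "f_front": "extrude1"}
before = sum(v == "extrude1" for v in ownership.values())

# Extrude 2 cuts a notch, splitting the top face into two pieces
ownership = split_face(ownership, "f_top", ["f_top_left", "f_top_right"])
after = sum(v == "extrude1" for v in ownership.values())

print(after - before)  # 1: one additional face belongs to extrude 1
```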

Canonical Face Segmentation

What this means is that the per-extrude faces listed in the json don't add up to the faces of the final design, because of how faces are split and merged. What you can do is generate the design in Fusion, then capture the canonical face segmentation (as determined by Fusion) from the end of the timeline. I prototyped some code to do this here: https://github.com/AutodeskAILab/Fusion360GalleryDataset/blob/kddw/point-labels/tools/reconverter/reconverter.py and you can see the output here: https://github.com/AutodeskAILab/Fusion360GalleryDataset/blob/kddw/point-labels/tools/testdata/51022_47816098_0003_labels.json This code will produce a list of faces where the individual steps combined will match the final design.
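Once you have such a labels file, a typical post-processing step is to group the final design's faces by the timeline step that created them. This is a sketch only: the field names below (`faces`, `uuid`, `timeline_index`) are assumptions for illustration, so check the linked labels json for the real schema:

```python
import json
from collections import defaultdict

# Assumed (hypothetical) schema: a list of face records, each with a uuid
# and the index of the timeline step that created it.
labels_json = """
{"faces": [
    {"uuid": "a", "timeline_index": 0},
    {"uuid": "b", "timeline_index": 0},
    {"uuid": "c", "timeline_index": 1}
]}
"""

def faces_by_step(data):
    """Group face uuids by the timeline step that created them."""
    groups = defaultdict(list)
    for face in data["faces"]:
        groups[face["timeline_index"]].append(face["uuid"])
    return dict(groups)

data = json.loads(labels_json)
print(faces_by_step(data))  # {0: ['a', 'b'], 1: ['c']}
```

Because the segmentation is captured from the end of the timeline, the union of these per-step groups covers the final design exactly.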

However, in your case, if you want to make a B-Rep graph over time, this trick probably won't work, as you need to actually have the timeline marker set at each step to access the actual geometry. So the graph you export at each step will have faces that get split and merged as shown above.

Sequential Graph Export

If you can live with splits and merges, you could take the reconverter code and, at the sequential extrude export step: https://github.com/AutodeskAILab/Fusion360GalleryDataset/blob/master/tools/reconverter/reconverter.py#L81 export the graph: https://github.com/AutodeskAILab/Fusion360GalleryDataset/blob/master/tools/fusion360gym/server/command_export.py#L170 Then you will want to write out whatever information you need to json using the state of the design at each step. So in essence you are re-processing the data to make a custom version.

Finally, if you want to batch process a lot of data, I'd suggest using this script, which handles relaunching Fusion if a crash or timeout occurs: https://github.com/AutodeskAILab/Fusion360GalleryDataset/blob/kddw/extrude-tool-export/tools/reconverter/launch.py
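The relaunch pattern itself is simple and worth sketching. This is a minimal stand-in, not the actual launch.py: run a worker process with a timeout, and start it again if it crashes or hangs. The trivial `sys.executable -c` command below is a placeholder for launching Fusion 360 with the reconverter script:

```python
import subprocess
import sys

def run_with_relaunch(cmd, timeout_sec=60, max_attempts=3):
    """Run cmd, relaunching on crash (nonzero exit) or hang (timeout).
    Returns the attempt number that succeeded, or None if all failed."""
    for attempt in range(1, max_attempts + 1):
        try:
            result = subprocess.run(cmd, timeout=timeout_sec)
            if result.returncode == 0:
                return attempt
        except subprocess.TimeoutExpired:
            continue  # process hung: relaunch and try again
    return None

# Placeholder worker that exits cleanly on the first attempt
ok = run_with_relaunch([sys.executable, "-c", "print('done')"],
                       timeout_sec=30)
print(ok)  # 1
```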

That's a lot of information, so let me know if you need further clarification.

hongfenglu commented 3 years ago

Thanks so much for the detailed reply. My goal is to create the regraph dataset for training the regraphnet model as done in the paper. So my previous description of matching face ids is what I thought your code would do in my case. Say the second example is used for training the regraphnet model, and assume that the end face of the first extrusion is the top face (node) that is split. Then is this node (top face) id present in node_names_tar, and what would be the label_end_now for the first extrusion?

Without a timeline json file, I don't think I can easily use reconverter. As for the server export command, may I use it while my add-in script is running to create the designs?

karldd commented 3 years ago

My goal is to create the regraph dataset for training the regraphnet model as did in the paper.

In that case can you use the existing preprocessed data here? https://github.com/AutodeskAILab/Fusion360GalleryDataset/tree/master/tools/regraph#preprocessed-data

Then is this node (top face) id presented in node_names_tar, and what would be the label_end_now for the first extrusion?

I'm not sure I follow. But the uuids used in the dataset json and regraph are selected at random. So when you run regraph, it doesn't inherit the uuids from the json.

If we ran regraph on the second shape above it would look like this: [image]

The blue faces would be the start faces. And the faces with the x would be the end face. Note that for the second extrude (on the right), the end face simply needs to be parallel to the start face. Also note that the top face is not yet split, so there is only one face that can be chosen.
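The "end face simply needs to be parallel to the start face" condition can be checked numerically: two planar faces are parallel when their unit normals are parallel or antiparallel. A minimal sketch (the function name and tolerance are my own, not from the regraph code):

```python
# Check whether two planar face normals are (anti)parallel, i.e. the
# absolute value of the cosine of the angle between them is ~1.

def is_parallel(n1, n2, tol=1e-6):
    dot = sum(a * b for a, b in zip(n1, n2))
    norm1 = sum(a * a for a in n1) ** 0.5
    norm2 = sum(a * a for a in n2) ** 0.5
    return abs(abs(dot) / (norm1 * norm2) - 1.0) < tol

print(is_parallel((0, 0, 1), (0, 0, -1)))  # True: opposite-facing planes
print(is_parallel((0, 0, 1), (1, 0, 0)))   # False: perpendicular planes
```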

For server export command, may I use it while my add-in script is running for creating the designs?

So you can call any of the underlying commands used by the gym server directly, but the gym API itself isn't callable. See reconverter.py for an example of how the underlying commands are called.