Closed Enigma-li closed 2 years ago
We do have a plan to release the runnable example sometime next month. Meanwhile, you can follow the pipeline below to generate the semi-synthetic data:

1. Call get_distributions_from_dataset(data_dir, filter, split_file) to get a list of distributions from the dataset. The split_file, which is the train_test.json in the test folder, can be found in the Reconstruction dataset you download.
2. Save/load the distributions with get_distribution_from_json(json_file). That's how we got d7_distributions.json and d7_training_distributions.json in the test folder.
3. Use distributions such as sketch_plane to determine the starting sketch plane of each design, and other distributions, like num_faces or num_bodies, to determine the ending point of each design.
4. Sample each design via the chain sample_design() -> sample_sketch() -> reconstruct_sketch() -> sample_profiles() -> add_extrude()
Hope this helps.
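As a rough illustration of the flow above, here is a minimal sketch of the sampling step. Everything in it is a hypothetical stand-in for the real repo code; it only shows how distributions could drive the starting plane and stopping point of a sampled design:

```python
import random

def sample_design(distributions, rng):
    """Hypothetical sketch of the sampling chain
    (sample_design -> sample_sketch -> ... -> add_extrude).
    `distributions` would come from get_distribution_from_json(...)
    in the real pipeline; here it is a plain dict of value lists."""
    # Starting point: draw the initial sketch plane.
    plane = rng.choice(distributions["sketch_plane"])
    # Ending point: draw a target face count to stop at.
    target_faces = rng.choice(distributions["num_faces"])
    return {"sketch_plane": plane, "num_faces": target_faces}

# Usage with toy distributions (values are made up).
toy = {"sketch_plane": ["XY", "XZ", "YZ"], "num_faces": [6, 8, 10]}
design = sample_design(toy, random.Random(0))
print(design["num_faces"] in toy["num_faces"])  # True
```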
Thanks for your kind reply. I am looking forward to the runnable examples and will try your suggested steps.
Just now, I switched to the random_designer
branch and found that in the fusion360gym/client folder there is a script called random_designer.py
that generates some synthetic data.
So I ran the code using the default configuration, but selected only a tiny part of the released dataset; the train_test split is shown below:
{
"train": [
"20203_7e31e92a_0000",
"20232_e5b060d9_0002",
"20241_6bced5ac_0000",
"20276_d041fa0f_0000",
"20342_c140780b_0000",
"20440_27177360_0001",
"20440_27177360_0002",
"20440_27177360_0003",
"20440_27177360_0004",
"20442_00c8a1db_0000",
"20506_17515038_0000",
"20535_f2420e23_0000",
"20591_20e06209_0000",
"20613_3b63ee83_0000",
"20722_90d46353_0000",
"20773_eb772a0a_0000"
],
"test": [
"20797_78b83489_0000",
"20849_9821b6fe_0000",
"20934_fe7fbd78_0000",
"20945_8b57f672_0000",
"20945_8b57f672_0001",
"20951_2918f2fa_0000",
"21028_d4d30a2e_0000",
"21121_1e9e7540_0000",
"21180_594c9e3e_0000",
"21186_d1698d8c_0000"
]
}
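Loading and sanity-checking a split file like the one above only needs the standard library; this is just an illustrative sketch, not code from the repo (the checks are assumptions about what a valid split should look like):

```python
import json
import os
import tempfile
from pathlib import Path

def load_split(split_file):
    """Load a train/test split json like the one above and sanity-check it."""
    split = json.loads(Path(split_file).read_text())
    assert set(split) == {"train", "test"}, "expected train/test keys"
    overlap = set(split["train"]) & set(split["test"])
    assert not overlap, f"designs in both splits: {overlap}"
    return split

# Usage with a tiny inline split written to a temp file.
tiny = {"train": ["20203_7e31e92a_0000"], "test": ["20797_78b83489_0000"]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(tiny, f)
split = load_split(f.name)
os.unlink(f.name)
print(len(split["train"]), len(split["test"]))  # 1 1
```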
It is currently running, and in the generated design folder I get several files per design; the pattern is one .f3d file, step json files, and a sequence json file:
Could the program generate the same JSON file as in the released dataset, containing the following parts, as shown below:
metadata
timeline
entities
properties
sequence
The json
that the random designer outputs is the graph representation we use (see regraph).
Unfortunately, right now we don't have any code to export in the same json format we provide with the dataset.
Re: killing the server
What output do you get from python launch.py --detach?
It sounds like it is set to run on startup?
Noted, I will check the regraph
tool.
As for the killing: I started an instance via the Python tool and then killed it, which unblocked the UI. I guess that when I close the UI directly, I lose the server process, so the detach
command does not work.
Re: json
If there is a specific thing you are trying to accomplish, let us know. Perhaps there is some other way of solving it directly from the f3d data that is generated.
Re: killing the server
So you can check the launch.json
in the same directory as launch.py
. It will look something like this usually:
{
"http://127.0.0.1:8080": {
"host": "127.0.0.1",
"port": 8080,
"connected": true
}
}
This is the file that launch.py
uses to detach the server.
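As a hedged illustration, parsing such a launch.json to list the endpoints still marked connected could look like this (the field names come from the example above; the function itself is hypothetical, not part of the repo):

```python
import json

def connected_servers(launch_json_text):
    """Return (url, host, port) for every entry marked connected
    in a launch.json like the example above."""
    entries = json.loads(launch_json_text)
    return [(url, info["host"], info["port"])
            for url, info in entries.items()
            if info.get("connected")]

# Usage with the example content from above.
sample = """{
  "http://127.0.0.1:8080": {
    "host": "127.0.0.1",
    "port": 8080,
    "connected": true
  }
}"""
print(connected_servers(sample))
# [('http://127.0.0.1:8080', '127.0.0.1', 8080)]
```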
You can also manually move the directory of the repo. Then the next time you launch Fusion 360 it won't be able to run the plugin and will remove it. In general I tend to run things in debug mode (without 'run on startup') unless I'm doing some data processing. It makes it a little easier to know what is going on.
Re: json Thanks very much! I will definitely take you up on that offer and will contact you if further help is needed.
Re: killing the server When adding the script into Fusion 360, the 'run on startup' checkbox is ticked automatically. As you said, manually moving the directory is an easy way to solve this.
One more problem I want to report: when I batch process the data, e.g., extracting feature edges from new parts based on the reconverter
script, Fusion 360 crashes every time after it has processed about 684 json files. I guess there may be a bug :)
Re: crashes
Yes, there can be some instability, sometimes from specific action sequences, and sometimes due to memory issues over time. Take a look at launch.py
in the regraph example for one way to handle restarting Fusion 360 when it crashes or times out. The basic steps are:

1. Edit launch.py so it is set to the correct json file. Also change the timeout limit -- this is used to stop processing if things take too long.
2. Run launch.py from the command line.

Thanks very much, that's really helpful!
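The restart-on-crash-or-timeout pattern described above can be sketched in plain Python. This is not the repo's launch.py, just a minimal illustration; the command, timeout, and retry count are all placeholder values:

```python
import subprocess
import sys

def run_with_restart(cmd, timeout_s, max_restarts):
    """Run cmd; if it crashes or exceeds timeout_s, restart it,
    up to max_restarts additional times. Returns True on a clean exit."""
    for attempt in range(max_restarts + 1):
        try:
            result = subprocess.run(cmd, timeout=timeout_s)
            if result.returncode == 0:
                return True           # clean exit, nothing to restart
        except subprocess.TimeoutExpired:
            pass                      # treat a hang like a crash
        print(f"attempt {attempt + 1} failed, restarting...")
    return False

# Usage: a trivial command standing in for launching the processing job.
ok = run_with_restart([sys.executable, "-c", "print('processing')"],
                      timeout_s=30, max_restarts=2)
print(ok)  # True
```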
Hi,
One further question: other than the distributions, are the stroke constraints fulfilled in the generation of the semi-synthetic data? Some examples I generated are shown below; it seems that constraints, e.g., the concentricConstraint, are not retained.
BTW, I used roughly the first 30 models as the example dataset, with the default setup in the generation script, to get the result above.
Also, what does the CoincidentConstraint mean?
Currently, sketch_extrude_importer.py
doesn't apply the constraints in the json file at all. The Fusion 360 Gym, which we use to make the semi-synthetic data, also doesn't support adding constraints. So nothing in the semi-synthetic data will be constrained.
That said, it would be possible with the Fusion API to add a curve, then constrain it. Simple example here. As you can see it is pretty manual.
Re: CoincidentConstraint Take a look here for a nice visual explanation. https://www.nyccnc.com/guide-fusion-360-sketch-constraints/
Noted, and thanks very much!
Hi,
I am adding more semi-synthetic data by taking existing designs and modifying or recombining them, using the tool mentioned in Section A.2.4 of the supplemental material. But the related code is only in the test
folder, and some configuration files are missing from the repo, e.g., ...
Is there a runnable example for the randomized reconstruction part?