bcmi / GracoNet-Object-Placement

Official code for ECCV2022 paper: Learning Object Placement via Dual-path Graph Completion
MIT License

Inference #3

Closed azimjonn closed 1 year ago

azimjonn commented 1 year ago

How can I do inference on my own images? I found the code a little hard to follow. I would appreciate it if someone could help me.

lulubbb commented 1 year ago

I have the same problem. Also, may I ask why you need to pass the coordinates of the foreground into the model? Thanks!

lulubbb commented 1 year ago

@Siyuan-Zhou @ustcnewly

Siyuan-Zhou commented 1 year ago

@azimjonn @lulubbb The coordinates of the foreground are used to compute the affine transformation function (see https://github.com/bcmi/GracoNet-Object-Placement/blob/713d6df5a41c738e2a5edf5db98b315046855b76/model.py#L85). If you would like to try your own images, prepare a .csv file in the same format as '/test_data.csv', which includes the paths to the foreground/background, and run inference using that .csv file. You may have to modify 'loader/base.py' and add a new 'mode_type' to indicate your custom input. The 'mode_type' is just the argument '--eval_type' in 'infer.py' when you perform inference to generate composite images. I will update the README with more detailed instructions.
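For reference, here is a minimal sketch of how such a .csv might be assembled for custom images. The column names (`fg_path`, `bg_path`) and the output filename `my_data.csv` are assumptions for illustration only; check the repo's actual 'test_data.csv' for the exact header that 'loader/base.py' expects.

```python
import csv

# Hypothetical column names -- verify against the repo's test_data.csv
# before running inference, as loader/base.py parses specific fields.
rows = [
    {"fg_path": "my_images/fg_001.png", "bg_path": "my_images/bg_001.jpg"},
    {"fg_path": "my_images/fg_002.png", "bg_path": "my_images/bg_002.jpg"},
]

# Write the inference list as a headed .csv file.
with open("my_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["fg_path", "bg_path"])
    writer.writeheader()
    writer.writerows(rows)
```

You would then point 'infer.py' at this file, passing your new mode via '--eval_type' as described above (the exact invocation depends on how you wire the new mode into 'loader/base.py').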

XuGW-Kevin commented 1 year ago

Hello Zhou, sorry, but I also found it difficult to run inference on user-defined images. Will instructions be released? Thanks, looking forward to it!

kts707 commented 1 year ago

@ustcnewly @Siyuan-Zhou I also find it difficult to do inference on my own images. It would be super helpful if you could release instructions on how to do this properly. Much appreciated!