tomorrow1238 / Ultraman


No positive texture result #8

Open Thesaltedfish-single opened 5 days ago

Thesaltedfish-single commented 5 days ago

[image] [image]

I ran the entire pipeline, but the front of my textured result is always blank, even though my mask looks normal.

tomorrow1238 commented 3 days ago

Thank you for supporting our work and for the valuable question. The corresponding UV map needs to be exported first. Please check whether the input contains the following three files:

1. mesh_normalized.obj
2. mesh_normalized.png
3. mesh_normalized.mtl

If not, you can run our run_mesh.sh to generate them.

If you are using an obj file generated by another reconstruction method (I noticed you are using ECON here instead of 2k2k), you can run the last two commands in our run_mesh.sh script to generate these files.

The related commands are as follows:

    python scripts/generate_uv.py --input_dir ${obj_path} --obj_name decimation_mesh --output_dir ${obj_path}
    python scripts/normalize_mesh.py --input_dir ${obj_path} --obj_name mesh
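(For reference, a minimal sketch to verify that the three files were actually written before starting the texturing stage; the directory path is a placeholder for your ${obj_path} / output directory:)

    # Hypothetical pre-flight check before running generate_texture.py:
    # confirm the UV export produced all three expected files.
    from pathlib import Path

    out_dir = Path("mesh_res")  # placeholder for ${obj_path}
    for name in ["mesh_normalized.obj", "mesh_normalized.png", "mesh_normalized.mtl"]:
        status = "found" if (out_dir / name).exists() else "MISSING"
        print(f"{name}: {status}")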

Finally, thank you very much for your support.

Thesaltedfish-single commented 3 days ago

Actually, when I directly replaced 2k2k with ECON, I passed ECON's output straight into:

    python scripts/decimation_mesh.py --input_path ${INPUT_PNG_DIR}/test/output_objs/${PNG_NAME}.obj --output_path ${FINAL_RESULT_PATH}
    python scripts/generate_uv.py --input_dir ${FINAL_RESULT_PATH} --obj_name decimation_mesh --output_dir ${FINAL_RESULT_PATH}
    python scripts/normalize_mesh.py --input_dir ${FINAL_RESULT_PATH} --obj_name mesh

The image I provided is the result of that processing. Thanks.
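(For reference, a rough way to sanity-check the decimated and normalized mesh, assuming trimesh is available; the path is a placeholder for ${FINAL_RESULT_PATH}:)

    # Print vertex/face counts to confirm the decimation and normalization
    # steps actually ran on the ECON output.
    import trimesh

    mesh = trimesh.load("final_res/mesh_normalized.obj", process=False, force="mesh")
    print("vertices:", len(mesh.vertices), "faces:", len(mesh.faces))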

tomorrow1238 commented 3 days ago

Do you have these three files after the process?

1. mesh_normalized.obj
2. mesh_normalized.png
3. mesh_normalized.mtl

Thesaltedfish-single commented 3 days ago

[image]

    print("=> initializing input arguments...")
    parser = argparse.ArgumentParser()
    parser.add_argument("--input_dir", type=str, default="/root/autodl-tmp/Ultraman/mesh_res")
    parser.add_argument("--output_dir", type=str, default="/root/autodl-tmp/Ultraman/final_res")
    parser.add_argument("--origin_image_path", type=str, default="/root/autodl-tmp/Ultraman/image/OIP_clean.png")
    parser.add_argument("--obj_name", type=str, default="mesh_normalized")
    parser.add_argument("--obj_file", type=str, default="mesh_normalized.obj")
    parser.add_argument("--prompt", type=str, default="a man, blue short hair,blue business suit, black slacks pants, black belt, brown dress shoes, mustache beard, standing")
    parser.add_argument("--a_prompt", type=str, default="best quality, high quality, extremely detailed, good geometry")
    parser.add_argument("--n_prompt", type=str, default="deformed, extra digit, fewer digits, cropped, worst quality, low quality, smoke, paintings, cartoon, anime, sketches, ugly, blurry, Tan skin, dark skin, black skin, skin spots, skin blemishes, age spot, glans, disabled, distorted, bad anatomy, morbid, inconsistent skin, bad shoes.")
    #parser.add_argument("--n_prompt", type=str, default="deformed, extra digit, fewer digits, cropped, worst quality, low quality, smoke, paintings, cartoon, anime, sketches, ugly")
    parser.add_argument("--new_strength", type=float, default=1)
    parser.add_argument("--ddim_steps", type=int, default=50)
    parser.add_argument("--num_inference_steps", type=int, default=50)
    parser.add_argument("--guidance_scale", type=float, default=7.5)
    parser.add_argument("--view_threshold", type=float, default=0.1)
    parser.add_argument("--viewpoint_mode", type=str, default="predefined", choices=["predefined", "hemisphere"])
    parser.add_argument("--seed", type=int, default=42)

    parser.add_argument("--use_unnormalized", action="store_true", help="save unnormalized mesh")
    parser.add_argument("--no_update", action="store_true", help="do NOT apply update")
    parser.add_argument("--add_view_to_prompt", action="store_true", help="add view information to the prompt")
    parser.add_argument("--post_process", action="store_true", help="post processing the texture")
    parser.add_argument("--smooth_mask", action="store_true", help="smooth the diffusion mask")

    # device parameters
    parser.add_argument("--device", type=str, choices=["person"], default="person")

    # camera parameters NOTE need careful tuning!!!
    parser.add_argument("--test_camera", action="store_true")
    parser.add_argument("--dist", type=float, default=0.7, 
        help="distance to the camera from the object")
    parser.add_argument("--elev", type=float, default=0,
        help="the angle between the vector from the object to the camera and the horizontal plane")
    parser.add_argument("--azim", type=float, default=180,
        help="the angle between the vector from the object to the camera and the vertical plane")

    args = parser.parse_args()

In order to debug the project, instead of using bash to call the script, I passed the parameters into the script manually and then debugged. These are the relevant parameters I used. I am sure that the three files have been generated, and I have set the corresponding paths in generate_texture.py.
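(For reference, one way to feed arguments when debugging inside an IDE, reusing the parser defined in the snippet above, is to pass an explicit argv list to parse_args; the paths below just mirror the defaults:)

    # Hypothetical debugging pattern: bypass the bash wrapper and pass the
    # arguments explicitly instead of editing the parser defaults.
    args = parser.parse_args([
        "--input_dir", "/root/autodl-tmp/Ultraman/mesh_res",
        "--output_dir", "/root/autodl-tmp/Ultraman/final_res",
        "--obj_name", "mesh_normalized",
        "--obj_file", "mesh_normalized.obj",
    ])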

Do I need to use the corresponding bash file?

Thanks

tomorrow1238 commented 3 days ago

You can check whether there are front view textures generated in mesh_normalized.png.
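(A rough way to check this programmatically, assuming Pillow and NumPy are available; the path is a placeholder:)

    # If mesh_normalized.png is still an (almost) uniform image, nothing was
    # baked into the texture atlas for the front view.
    import numpy as np
    from PIL import Image

    tex = np.array(Image.open("mesh_res/mesh_normalized.png").convert("RGB"))
    print("texture size:", tex.shape)
    print("unique colors:", len(np.unique(tex.reshape(-1, 3), axis=0)))
    print("mean pixel value:", tex.mean())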

Thesaltedfish-single commented 3 days ago

(laughs) Because the server storage space was too small, the mesh_normalized.png from ECON was not saved at the time. I have a mesh_normalized.png from 2k2k here; although the face count is different, it behaves the same as the ECON case and has no front texture. The two situations are similar; the only difference is that the ECON result is smaller and has fewer faces.

[image]

Thesaltedfish-single commented 3 days ago

OK, OK, thank you.

tomorrow1238 commented 3 days ago

I suspect that the UV map was not exported properly. Because the face count of 2k2k meshes is too high, we run a step to reduce the number of faces. If that is the result you got for 2k2k, I think the UV map was not exported properly.
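(One quick way to check the UV export uses only the standard OBJ tags: 'vt' lines are texture coordinates, and 'mtllib'/'usemtl' reference the material that points at mesh_normalized.png. A minimal sketch, with a placeholder path:)

    # Count OBJ tags: if vt is 0 or mtllib/usemtl are missing, the UV map
    # was not exported properly.
    from collections import Counter

    counts = Counter()
    with open("mesh_res/mesh_normalized.obj") as f:
        for line in f:
            tokens = line.split()
            if tokens:
                counts[tokens[0]] += 1

    print("v:", counts["v"], "vt:", counts["vt"], "f:", counts["f"])
    print("mtllib:", counts["mtllib"], "usemtl:", counts["usemtl"])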


Thesaltedfish-single commented 2 days ago

Thank you for your answer!

However, I would like to add that this is the result of taking the obj file from ECON and running it through the generate_uv.py script. The initial mesh is blank (although I am not sure why some colored dots appear).

[image]

From ECON's unwrapped UV image alone, I did not see any problem with it (ECON itself outputs an untextured white model).

[image]

But I found that when running the generate_texture script, there were seams on the sides, similar to the result below.

[image] [image]

Is it possible that this is caused by my incomplete image cutout? I was unable to use the API you provided, so I used another method to do the cutout.

[image]
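(A rough way to check the cutout itself, assuming it was saved as an RGBA image; the path is a placeholder based on the origin_image_path default above:)

    # Ragged edges or holes in the alpha channel of the cutout can show up as
    # seams when the generated views are projected back onto the mesh.
    import numpy as np
    from PIL import Image

    alpha = np.array(Image.open("image/OIP_clean.png").convert("RGBA"))[..., 3]
    print(f"foreground coverage: {(alpha > 0).mean():.2%}")
    print(f"soft (partially transparent) pixels: {((alpha > 0) & (alpha < 255)).mean():.2%}")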

Also, when I use the original script, the front of my model is blank. Could it be related to the code I modified?

        #     generate_image = get_controlnet_depth_ipadapter_sdxl(origin_image, depth_map, prompt, args.n_prompt, 
        #             args.num_inference_steps, args.new_strength, args.guidance_scale, args.seed)

        if view_idx == 0:
            generate_image = init_image.convert("RGBA")
            generate_image_before = init_image.convert("RGBA")
            generate_image_after = init_image.convert("RGBA")

The result looks like this:

[image]

tomorrow1238 commented 2 days ago

You can try opening the obj file shown in this image with MeshLab. The output of ECON is colored per vertex, not via a UV map, so you cannot see the texture in this image. Check whether the texture is visible when you open the mesh in MeshLab.

[image]
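(A quick way to confirm this, assuming trimesh is installed; the path is a placeholder for the ECON obj:)

    # trimesh reports how a mesh is colored: visual.kind is 'vertex' for
    # per-vertex colors and 'texture' for a UV-mapped material.
    import trimesh

    mesh = trimesh.load("econ_output.obj", process=False, force="mesh")
    print("visual kind:", mesh.visual.kind)
    if mesh.visual.kind == "vertex":
        print("vertex colors:", mesh.visual.vertex_colors.shape)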