Open 0mil opened 5 months ago
You must first run prepare_data.py; it will create the ours folder.
@KeyuWu-CS
Thank you for your response!
While running the python prepare_data.py --yaml=configs/reconstruct/big_wavy1 step following your instructions, I discovered that the ours/Occ3D.mat file was not generated, leading to the following error when executing python infer_inner.py --yaml=configs/reconstruct/big_wavy1.
It is correct that the ours/Occ3D.mat file should be generated after prepare_data.py runs successfully, right?
Error message:
The numpy.ndarray strands is empty!
(MonoHair) me@ubuntu:/home/users/me/MonoHair$ python infer_inner.py --yaml=configs/reconstruct/big_wavy1
Process ID: 200078
setting configurations...
loading configs/reconstruct/base.yaml...
loading configs/reconstruct/big_wavy1.yaml...
* HairGenerate:
* connect_dot_threshold: 0.8
* connect_scalp: True
* connect_segments: True
* connect_threshold: 0.0025
* connect_to_guide: None
* dist_to_root: 6
* generate_segments: True
* grow_threshold: 0.85
* out_ratio: 0.35
* PMVO:
* conf_threshold: 0.15
* filter_point: True
* genrate_ori_only: None
* infer_inner: True
* num_sample_per_grid: 4
* optimize: True
* patch_size: 7
* threshold: 0.025
* visible_threshold: 1
* bbox_min: [-0.32, -0.32, -0.24]
* bust_to_origin: [0.006, -1.644, 0.01]
* camera_path: camera/calib_data/wky07-22/cam_params.json
* check_strands: True
* cpu: None
* data:
* Conf_path: conf
* Occ3D_path: ours/Occ3D.mat
* Ori2D_path: best_ori
* Ori3D_path: ours/Ori3D.mat
* bust_path: Bust/bust_long.obj
* case: big_wavy1
* depth_path: render_depth
* frame_interval: 7
* image_size: [1920, 1080]
* mask_path: hair_mask
* raw_points_path: ours/colmap_points.obj
* root: data
* scalp_path: ours/scalp_tsfm.obj
* strands_path: ours/world_str_raw.dat
* device: cuda:0
* gpu: 0
* image_camera_path: ours/cam_params.json
* infer_inner:
* render_data: True
* run_mvs: True
* name: 10-16
* ngp:
* marching_cubes_density_thresh: 3.0
* output_root: output
* prepare_data:
* fit_bust: None
* process_bust: True
* process_camera: True
* process_imgs: True
* render_depth: True
* run_ngp: True
* select_images: True
* save_path: refine
* scalp_diffusion: None
* seed: 0
* segment:
* CDGNET_ckpt: assets/CDGNet/LIP_epoch_149.pth
* MODNET_ckpt: assets/MODNet/modnet_photographic_portrait_matting.ckpt
* scene_path: None
* vsize: 0.005
* yaml: configs/reconstruct/big_wavy1
generate segments...
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
done...
Warning: 'strands' is empty. Nothing to save.
unable to load materials from: ./bust_long_c.obj.mtl
num of strands: 0
Traceback (most recent call last):
File "/home/users/me/MonoHair/infer_inner.py", line 71, in <module>
render_data(camera ,strands ,vertices ,faces ,[1280 ,720] ,os.path.join(args.data.root ,'imgs'))
File "/home/users/me/MonoHair/Utils/Render_utils.py", line 272, in render_data
renderStrands = StrandsObj(strands, Render.ctx)
File "/home/users/me/MonoHair/Utils/Render_utils.py", line 26, in __init__
self.Lines = np.concatenate(self.Lines, 0)
File "<__array_function__ internals>", line 180, in concatenate
ValueError: need at least one array to concatenate
The 3D Ori will be created in Output/experiment_name/refine/Ori.mat after running PMVO.py. Another important thing: you must check whether any step fails while running prepare_data.py. There are many steps, and if one step has a problem, error info will appear in the output but the program will not stop. So watch for it.
The full folder on our OneDrive now has one case; you can check it.
Thank you for the information. I will make sure to closely monitor each step of the 3D hair reconstruction process and recheck for any errors in the prepare_data.py step. Thanks again for your assistance.
@KeyuWu-CS
I have one last question. Thank you in advance for your response.
What conversion should generally be used to import the CyHair file format (.hair) into 3D software?
I have tried several open-source converters to view the resulting connected_strands.hair in Blender or Unreal, but they do not seem to work properly.
You can export to Blender and convert to a particle system, then create a suitable render material to render different hair styles. We will also provide a .exe program to visualize the .hair file with a simple renderer in the next week or two. You can also visualize it simply with Open3D.
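If it helps, strands loaded with the repo's load_strand can be written out as Wavefront OBJ polylines, which Blender imports natively and can then convert to curves or a particle system. A minimal sketch, where strands_to_obj is my own helper (not part of this repo) and strands is assumed to be a list of (N, 3) point sequences:

```python
# Hedged sketch: write strands as OBJ polylines for Blender import.
# Assumes `strands` is a list of per-strand (N, 3) point sequences,
# e.g. as returned by Utils.Utils.load_strand.

def strands_to_obj(path, strands):
    with open(path, 'w') as f:
        offset = 1  # OBJ vertex indices are 1-based
        for strand in strands:
            for x, y, z in strand:
                f.write(f'v {x} {y} {z}\n')
            # an "l" record connects consecutive vertices into one polyline
            indices = ' '.join(str(offset + i) for i in range(len(strand)))
            f.write(f'l {indices}\n')
            offset += len(strand)
```

After importing the OBJ in Blender, the polylines arrive as mesh edges and can be converted to curves (Object > Convert > Curve) for hair shading.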
@KeyuWu-CS
Thank you for your response, and I apologize for the continued questions.
I have already attempted to import the model's result file, big_wavy1/output/10-16/full/connected_strands.hair, into Blender. However, Blender doesn't support the .hair format.
Is there a way to import a .hair file into Blender, perhaps by converting it to another format or by installing a plugin that supports .hair file import? Or is it possible that I am mistaken about the generated hair model file being connected_strands.hair?
The results directory, I think:
./big_wavy1_result/output/10-16/full
├── coarse.npy
├── coarse_ori.npy
├── connected_strands.hair # result "3D hair model" I think 🤔
├── num_root.npy
├── Occ3D.mat
├── Ori3D.mat
├── scalp_segment.hair
├── scalp_segment_smooth.hair
└── strands.hair
Yep, "connected_strands.hair" is the final result. But for visualization, Blender cannot directly use a .hair file, and it takes some expertise to render realistic hair. It's not easy if you are not familiar with Blender: you need to convert data formats, set up cameras, lighting, materials, etc. You can ask someone who is familiar with rendering hair in Blender for help, or just use Open3D to visualize the results, though that may not show the geometry well. You can also wait a few days; we will release a simple visualization tool.
@KeyuWu-CS
Oh, I see. I want to create a script to convert the .hair format file to other formats such as .obj, .abc, or .ply.
So, is this document (http://www.cemyuksel.com/research/hairmodels/) for the hair format you used? If not, could I get the HAIR file format specification you used for implementing the utilities?
You can find the storage format of .hair in https://github.com/KeyuWu-CS/MonoHair/blob/eaadc74d0e24210658741ba49ec8d59a6f5e6a5f/Utils/Utils.py#L25
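Based on the load_strand code linked above, the on-disk layout appears to be: a uint32 strand count, a uint32 total point count, one uint16 segment count per strand, then float32 x/y/z triples for every point. A round-trip sketch under that assumption; the function names here are my own, and the layout should be verified against the linked code before relying on it:

```python
import struct
import numpy as np

# Sketch of the .hair layout as read by the repo's load_strand
# (Utils/Utils.py#L25): two uint32s (num_strand, total point count),
# then num_strand uint16 segment counts, then float32 xyz points.

def write_hair(path, strands):
    points = np.concatenate(strands).astype(np.float32)
    with open(path, 'wb') as f:
        f.write(struct.pack('<II', len(strands), len(points)))
        f.write(struct.pack(f'<{len(strands)}H', *[len(s) for s in strands]))
        f.write(points.tobytes())

def read_hair(path):
    with open(path, 'rb') as f:
        num_strand, num_points = struct.unpack('<II', f.read(8))
        segments = struct.unpack(f'<{num_strand}H', f.read(2 * num_strand))
        pts = np.frombuffer(f.read(4 * num_points * 3), dtype='<f4')
    return list(segments), pts.reshape(-1, 3)
```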
I have implemented this code to convert the .hair format used in this repo to .ply.
import os, sys
sys.path.append('.')
import numpy as np
from Utils.Utils import load_strand
import open3d as o3d

def WriteStrandsToPly(fdst, strands, ignore_invalid=False):
    assert fdst.endswith('ply'), f'Error in write_strands2ply, Invalid fdst `{fdst}`...'
    pc_all_valid = []
    lines = []
    sline = 0
    for i in range(len(strands)):
        if ignore_invalid and len(strands[i]) == 1:
            continue
        for j in range(len(strands[i])):
            pc_all_valid.append(strands[i][j])
            if j == len(strands[i]) - 1:
                continue  # for the last node of a strand: do not save an index
            lines.append([sline + j, sline + j + 1])
        sline += len(strands[i])
    line_set = o3d.geometry.LineSet(points=o3d.utility.Vector3dVector(np.asarray(pc_all_valid)),
                                    lines=o3d.utility.Vector2iVector(lines))
    o3d.io.write_line_set(fdst, line_set)

hair_fpath = r'connected_strands.hair'
segments, points, strands, ori = load_strand(hair_fpath, True)
print(len(strands), len(segments))
WriteStrandsToPly(hair_fpath[:-5] + '.ply', strands)
@moranli-aca
segments = struct.unpack('H' * num_strand, segments)
struct.error: unpack requires a buffer of 2761065104 bytes
Hi, I'm trying to convert a .hair file to .ply and used the code you suggested, but I'm getting the above error while unpacking. Please help me with this.
You can extract the line set from the .ply via open3d.io.read_line_set(input_ply_dir), which will return the points and segments of all strands.
But I don't have a .ply file yet. I want to convert the .hair file to a .ply file and am getting an error in that process.
You can use this function https://github.com/KeyuWu-CS/MonoHair/blob/master/Utils/Utils.py#L25 to load the .hair file, as mentioned in previous answers. Trying this script might solve your problem.
import os, sys
sys.path.append('.')
import numpy as np
# load_strand is the function to load `.hair` in this repo
# at https://github.com/KeyuWu-CS/MonoHair/blob/master/Utils/Utils.py#L25
from Utils.Utils import load_strand
import open3d as o3d

def WriteStrandsToPly(fdst, strands, ignore_invalid=False):
    assert fdst.endswith('ply'), f'Error in write_strands2ply, Invalid fdst `{fdst}`...'
    pc_all_valid = []
    lines = []
    sline = 0
    for i in range(len(strands)):
        if ignore_invalid and len(strands[i]) == 1:
            continue
        for j in range(len(strands[i])):
            pc_all_valid.append(strands[i][j])
            if j == len(strands[i]) - 1:
                continue  # for the last node of a strand: do not save an index
            lines.append([sline + j, sline + j + 1])
        sline += len(strands[i])
    line_set = o3d.geometry.LineSet(points=o3d.utility.Vector3dVector(np.asarray(pc_all_valid)),
                                    lines=o3d.utility.Vector2iVector(lines))
    o3d.io.write_line_set(fdst, line_set)

hair_fpath = r'connected_strands.hair'
segments, points, strands, ori = load_strand(hair_fpath, True)
print(len(strands), len(segments))
WriteStrandsToPly(hair_fpath[:-5] + '.ply', strands)
I am sorry that I was unable to explain my problem clearly.
I am using this code only to write .ply files, but on its last third line it calls the load_strand() function from Utils.py. In that file, at line 36 (as shown in the image at the end), unpacking the segments throws this exception:
segments = struct.unpack('H' * num_strand, segments)
struct.error: unpack requires a buffer of 2761065104 bytes
How can I solve this? Sorry for the inconvenience.
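An unpack error of this size usually means the 4-byte strand count read from the header does not match the file's actual contents, e.g. the file is truncated or is in a different layout (Cem Yuksel's HAIR files, for instance, begin with the magic bytes b'HAIR'). A hedged diagnostic sketch; diagnose_hair is my own helper, not part of the repo:

```python
import os
import struct

# Check whether a file plausibly matches the layout load_strand expects
# (uint32 strand count first) before attempting a full unpack.

def diagnose_hair(path):
    size = os.path.getsize(path)
    with open(path, 'rb') as f:
        head = f.read(4)
    if head == b'HAIR':
        return "Cem Yuksel HAIR format, not this repo's layout"
    (num_strand,) = struct.unpack('I', head)
    needed = 8 + 2 * num_strand  # 8-byte header + uint16 segment table
    if needed > size:
        return (f'header claims {num_strand} strands, needing at least '
                f'{needed} bytes, but the file is only {size} bytes')
    return f'header is plausible: {num_strand} strands'
```

If the helper reports an implausible strand count, the input file is likely not in this repo's .hair layout and needs a different loader.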
@PNeelkanth16 I also wrote a script to convert the .hair format to .ply (Stanford Polygon). I think this will work in your environment. Try the script below.
import numpy as np
import struct

def load_strand(file):
    with open(file, mode='rb') as f:
        num_strand = f.read(4)
        (num_strand,) = struct.unpack('I', num_strand)
        point_count = f.read(4)
        (point_count,) = struct.unpack('I', point_count)
        segments = f.read(2 * num_strand)
        segments = struct.unpack('H' * num_strand, segments)
        segments = list(segments)
        num_points = sum(segments)
        points = f.read(4 * num_points * 3)
        points = struct.unpack('f' * num_points * 3, points)
    points = np.array(list(points))
    points = np.reshape(points, (-1, 3))
    return segments, points

def save_ply(filename, points):
    num_points = len(points)
    with open(filename, 'w') as f:
        # Write the header
        f.write('ply\n')
        f.write('format ascii 1.0\n')
        f.write(f'element vertex {num_points}\n')
        f.write('property float x\n')
        f.write('property float y\n')
        f.write('property float z\n')
        f.write('end_header\n')
        # Write the vertex points
        for point in points:
            f.write(f'{point[0]} {point[1]} {point[2]}\n')

def convert_hair_to_ply(hair_file, ply_file):
    _, points = load_strand(hair_file)
    save_ply(ply_file, points)

if __name__ == "__main__":
    hair_file = "{HAIR_FILE_Path}"  # Write the `.hair` file's path
    ply_file = "{DESTINATION_PATH}"  # Write the `.ply` file's destination path
    convert_hair_to_ply(hair_file, ply_file)
@0mil
I'm getting the same error as I mentioned earlier.
Traceback (most recent call last):
File "D:\Python\Extra\hairToPly.py", line 49, in <module>
convert_hair_to_ply(hair_file, ply_file)
File "D:\Python\Extra\hairToPly.py", line 43, in convert_hair_to_ply
points = load_strand(hair_file)
File "D:\Python\Extra\hairToPly.py", line 12, in load_strand
segments = struct.unpack('H' * num_strand, segments)
struct.error: unpack requires a buffer of 2761065104 bytes
Thank you for sharing this awesome project. However, I think there might be something missing to make it run correctly. I have already downloaded the data_processed folder, but there is no data/big_wavy1/ours/ folder. Please check the following error message: