EdwardSmith1884 / GEOMetrics

Repo for the paper "GEOMetrics: Exploiting Geometric Structure for Graph-Encoded Objects"
MIT License

Problem #5

Closed dclcs closed 4 years ago

dclcs commented 4 years ago

When I run data_prep.py, I have a problem:

found bundled python: /media/dcl/Data1/blender/2.79/python
Read blend: /media/dcl/Data1/paper_code/GEOMetrics-master/scripts/manage.blend ( 0.0001 sec | 0.0001 sec)
Importing OBJ 'data/objects/rifle/d442863599da17d951cfdb4c9f126c12.obj'... ( 0.0004 sec | 0.0003 sec)
Parsing OBJ file... ( 0.0307 sec | 0.0304 sec)
Done, loading materials and images...
Material not found MTL: 'data/objects/rifle/model.mtl' ( 0.0313 sec | 0.0309 sec)
Done, building geometries (verts:977 faces:4244 materials: 9 smoothgroups:1) ... ( 0.0667 sec | 0.0664 sec)
Done. ( 0.0668 sec | 0.0667 sec)
Finished importing: 'data/objects/rifle/d442863599da17d951cfdb4c9f126c12.obj' Progress: 100.00%
Writing: /tmp/manage.crash.txt
Segmentation fault (core dumped)

Is this a Blender version problem? I am not familiar with Blender.

EdwardSmith1884 commented 4 years ago

Did you figure it out?

KnightOfTheMoonlight commented 4 years ago

@EdwardSmith1884 Which version of blender did you use?

EdwardSmith1884 commented 4 years ago

Version 2.78, I think. I have not checked with the new 2.8 version; the scripts may not work there.

KnightOfTheMoonlight commented 4 years ago

@EdwardSmith1884 Yes, 2.79b works for me too.

EdwardSmith1884 commented 4 years ago

great!

dysdsyd commented 4 years ago

I am also facing the same problem with blender 2.79b. After tracking down the problem, I found that it was coming from:

bpy.ops.object.join(ctx) on line 40 of blender_convert.py

After running the following blender command independently:

blender scripts/manage.blend -b -P scripts/blender_convert.py -- data/objects/cabinet/ba17ef05393844dfcb7105765410e2d6.obj data/mesh_info/cabinet/ba17ef05393844dfcb7105765410e2d6 data/managable_objects/cabinet/ba17ef05393844dfcb7105765410e2d6.obj

I am getting:

[Screenshot of the resulting Blender error output]

EdwardSmith1884 commented 4 years ago

Does this occur for every object, or just a few? Perhaps you could just ignore these objects in the dataset; my data loader should handle their absence. Alternatively, you could skip the Blender step in the data-making script entirely; it only affects the latent loss, which is turned off by default.
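For example, a minimal sketch of that skip, assuming data_prep.py drives Blender through subprocess (the function and its argument names here are hypothetical, not the repo's actual code):

import subprocess

def convert_object(obj_file, info_file, out_file):
    # Hypothetical guard: run the per-object Blender conversion headless and
    # skip the object if Blender fails (e.g. segfaults) instead of aborting.
    cmd = ['blender', 'scripts/manage.blend', '-b', '-P',
           'scripts/blender_convert.py', '--', obj_file, info_file, out_file]
    try:
        subprocess.run(cmd, check=True, timeout=300)
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        print('skipping %s: Blender conversion failed' % obj_file)
        return False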

dysdsyd commented 4 years ago

This is occurring for all the objects. I am hoping to get the downscaled meshes for an experiment, which is why I can't skip this step. I believe this is a problem with the Blender code; I am using Blender 2.79b, as I was unable to run 2.78 on my server.

EdwardSmith1884 commented 4 years ago

I am currently using blender 2.79 and this seems to work for me:

bpy.ops.import_scene.obj(filepath=obj_location)
scene = bpy.context.scene

# collect and select every imported mesh part
obs = []
for ob in scene.objects:
    if ob.type == 'MESH':
        obs.append(ob)
        ob.select_set(True)

# make the last mesh active, then join all selected parts into it
bpy.context.view_layer.objects.active = ob
c = {}
c["object"] = c["active_object"] = bpy.context.object
c["selected_objects"] = c["selected_editable_objects"] = obs
bpy.ops.object.join(c)

Can you let me know if this fixes your issue?

dysdsyd commented 4 years ago

Can you please share the updated blender_convert.py file? I am not very well versed with Blender, so I'm not exactly sure what lines to remove.

EdwardSmith1884 commented 4 years ago

I won't post a new file tonight as I don't have access to Blender right now, but if you replace:

obs = []
for ob in scene.objects:
    if ob.type == 'MESH':
        obs.append(ob)
ctx = bpy.context.copy()
ctx['active_object'] = obs[0]
ctx['selected_objects'] = obs
ctx['selected_editable_bases'] = [scene.object_bases[ob.name] for ob in obs]
bpy.ops.object.join(ctx)
o = bpy.context.selected_objects[0]

# removes split normal, helps with decimation
bpy.context.scene.objects.active = o 

with

# collect and select every imported mesh part
obs = []
for ob in scene.objects:
    if ob.type == 'MESH':
        obs.append(ob)
        ob.select_set(True)

# make the last mesh active, then join all selected parts into it
bpy.context.view_layer.objects.active = ob
c = {}
c["object"] = c["active_object"] = bpy.context.object
c["selected_objects"] = c["selected_editable_objects"] = obs
bpy.ops.object.join(c)
bpy.context.view_layer.objects.active = ob  # 2.8+ replacement for scene.objects.active

it should work; if not, let me know and I'll send a file some time this week.
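Note that the two snippets mix Blender APIs: 2.79 uses ob.select and scene.objects.active, while 2.8+ uses ob.select_set() and view_layer.objects.active. A version-agnostic selection helper, sketched here as an assumption rather than anything in the repo, could cover both:

import bpy

def select_object(ob, active=False):
    # Sketch: handle the 2.79 -> 2.80 selection API change in one place.
    if bpy.app.version >= (2, 80, 0):
        ob.select_set(True)
        if active:
            bpy.context.view_layer.objects.active = ob
    else:
        ob.select = True
        if active:
            bpy.context.scene.objects.active = ob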

EdwardSmith1884 commented 4 years ago

This should work for newer versions of Blender; at least it did for 2.81:

import os
import bpy
import bmesh
import scipy.io as sio
import sys
import numpy as np

# this whole file is to make manageable obj files from overly large or small ones
# this is done for the latent loss calculations only

def triangulate_edit_object(obj):
    me = obj.data
    bm = bmesh.from_edit_mesh(me)
    bmesh.ops.triangulate(bm, faces=bm.faces[:], quad_method='BEAUTY', ngon_method='BEAUTY')
    bmesh.update_edit_mesh(me, True)

# import arguments
model = sys.argv[-3]
location_info = sys.argv[-2]
location_obj = sys.argv[-1]

# import object
bpy.ops.import_scene.obj(filepath=model)
scene = bpy.context.scene

# join components of mesh: collect and select every imported mesh part
obs = []
for o in scene.objects:
    if o.type == 'MESH':
        obs.append(o)
        o.select_set(True)

# make the last mesh active, then join all selected parts into it
bpy.context.view_layer.objects.active = o
c = {}
c["object"] = c["active_object"] = bpy.context.object
c["selected_objects"] = c["selected_editable_objects"] = obs
bpy.ops.object.join(c)
bpy.context.view_layer.objects.active = o
bpy.ops.object.editmode_toggle()
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.customdata_custom_splitnormals_clear()
bpy.ops.mesh.remove_doubles()
bpy.ops.object.editmode_toggle()

# shrinking the mesh to be a uniform size
# no idea if this actually helps with training
# it does make objects smaller, which makes them much quicker to load during training
# ideally all objects will be between 500 and 600 verts, but I allow between 400 and 700 verts
# the object is used during training regardless, but not for the latent loss
not_possible = False
num = float(len(o.data.vertices))
new_num = num
orig = num
full_ratio = .01
if num < 30:
    not_possible = True  # if it's too small then subsampling doesn't work well

# if large then decimate to make the right size
# or at least try to
elif num > 550:
    for i in range(5):
        mod = o.modifiers.new(name='decimate', type='DECIMATE')
        mod.ratio = max(550. / num, full_ratio)
        full_ratio /= (mod.ratio)
        bpy.ops.object.modifier_apply(modifier=mod.name)
        o.modifiers.clear()
        if float(len(o.data.vertices)) < 550:
            new_num = float(len(o.data.vertices))
            break
        else:
            num = float(len(o.data.vertices))
        if i == 4 and float(len(o.data.vertices)) > 700:  # if it can't be made small enough then don't convert it
            not_possible = True
        new_num = float(len(o.data.vertices))
# if small then try to make it larger
elif num < 400:
    mod = o.modifiers.new(name="Remesh", type='REMESH')
    mod.octree_depth = 6
    mod.use_remove_disconnected = False
    bpy.ops.object.modifier_apply(modifier=mod.name)
    o.modifiers.clear()
    num = float(len(o.data.vertices))
    # and then shrink it again
    if num > 500:
        for i in range(5):
            mod = o.modifiers.new(name='decimate', type='DECIMATE')
            mod.ratio = 550. / num
            bpy.ops.object.modifier_apply(modifier=mod.name)
            o.modifiers.clear()
            if float(len(o.data.vertices)) < 600:
                break
            else:
                num = float(len(o.data.vertices))
    else:
        not_possible = True
print('-------------------------------------------------------------------------')
print(float(len(o.data.vertices)))
print('-------------------------------------------------------------------------')
if not_possible:
    exit()

# triangulate the object
bpy.ops.object.editmode_toggle()
bpy.ops.mesh.dissolve_limited()
triangulate_edit_object(o)
bpy.ops.object.editmode_toggle()

# now we record the object info

# get initial face info
me = o.data
faces = []
for poly in me.polygons:
    vs = []
    for loop_index in range(poly.loop_start, poly.loop_start + poly.loop_total):
        vs.append(me.loops[loop_index].vertex_index)
    faces.append(vs)

# get initial vertex info, and normal info if you want it (I don't)
bm = bmesh.new()
bm.from_mesh(me)
verts, normals = [0 for i in range(len(bm.verts))], [0 for i in range(len(bm.verts))]
for e, v in enumerate(bm.verts):
    verts[v.index] = v.co
    normals[v.index] = v.normal

# calculate adjacency matrix and final face, vertex, and normal info
verts_map = {}
count = 0
for face in faces:
    for v in face:
        if v not in verts_map:
            verts_map[v] = [count, verts[v], normals[v]]
            count += 1
adj = np.zeros((len(verts_map), len(verts_map)))
true_verts = np.zeros((len(verts_map), 3))
true_normals = np.zeros((len(verts_map), 3))
for e, face in enumerate(faces):
    v1, v2, v3 = face
    adj[verts_map[v1][0], verts_map[v1][0]] = 1
    adj[verts_map[v2][0], verts_map[v2][0]] = 1
    adj[verts_map[v3][0], verts_map[v3][0]] = 1
    adj[verts_map[v1][0], verts_map[v2][0]] = 1
    adj[verts_map[v2][0], verts_map[v1][0]] = 1
    adj[verts_map[v1][0], verts_map[v3][0]] = 1
    adj[verts_map[v3][0], verts_map[v1][0]] = 1
    adj[verts_map[v2][0], verts_map[v3][0]] = 1
    adj[verts_map[v3][0], verts_map[v2][0]] = 1
    faces[e] = [verts_map[v1][0], verts_map[v2][0], verts_map[v3][0]]

for _, info in verts_map.items():
    spot, position, normal = info
    true_verts[spot] = position
    true_normals[spot] = normal

for obj in bpy.data.objects:
    obj.select_set(False)
o.select_set(True)

# save updated object, and object info
bpy.ops.export_scene.obj(filepath=location_obj)
sio.savemat(location_info, {'verts':np.array(true_verts), 
                'normals': np.array(true_normals), 
                'faces': np.array(faces),  
                'orig_adj': adj
                }
                )
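
For reference, the updated script is run headless the same way as the command earlier in the thread; the three trailing arguments map to sys.argv[-3] (input .obj), sys.argv[-2] (mesh info output), and sys.argv[-1] (output .obj). The placeholder paths here are just illustrative:

blender scripts/manage.blend -b -P scripts/blender_convert.py -- <input.obj> <mesh_info_output> <output.obj>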