nv-tlabs / LION

Latent Point Diffusion Models for 3D Shape Generation

Evaluating other datasets #63

Open watashihageteru opened 7 months ago

watashihageteru commented 7 months ago

Hi, I've been trying to reproduce the results on a dataset other than ShapeNet. I didn't face much difficulty during the training phase, but I'm struggling with the evaluation phase.

It seems that we need to create "datasets/test_data/xxxxxx.pt" to evaluate on another dataset. Would it be possible to provide the script that generates "datasets/test_data/xxxxxx.pt"?

ZENGXH commented 7 months ago

This is my old script used to generate the test data for the 55 classes (the dataset initialization arguments may need to be aligned with the current dataset). Is this what you are looking for?

# Imports added for completeness; adjust the dataset import to your project layout.
import numpy as np
import torch
from loguru import logger  # or the project's own logger

from datasets.pointflow_datasets import ShapeNet15kPointClouds

if __name__ == '__main__':
    #te_dataset = ShapeNet15kPointClouds(
    #    categories=['chair'], #cfg.cates,
    #    split='train', #eval_split,
    #    tr_sample_size=2048, #cfg.tr_max_sample_points,
    #    te_sample_size=2048, #cfg.te_max_sample_points,
    #    # normalize_per_shape=True,
    #    normalize_global=True,
    #)
    for cats in ['all']: ##, 'car', 'airplane']: 
        ## for s in ['train', 'val', 'test']:
        train_data = ShapeNet15kPointClouds(categories=[cats], split='train',
            tr_sample_size=2048, te_sample_size=2048,
            normalize_global=True)  

        data = ShapeNet15kPointClouds(categories=[cats], split='val',
            tr_sample_size=2048, te_sample_size=2048,
            all_points_mean=train_data.all_points_mean,
            all_points_std=train_data.all_points_std,
            normalize_global=True)  

        ref, m, s, uid = [], [], [], []
        for i, d in enumerate(data):
            ref.append(d['tr_points'])
            m.append(d['mean'])
            s.append(d['std']) 
            uid.append(d['oid'])

        ref_pcs = torch.from_numpy(np.stack(ref))
        m_pcs = torch.from_numpy(np.stack(m))
        s_pcs = torch.from_numpy(np.stack(s))
        logger.info('ref_pcs: {}', ref_pcs.shape) 
        np.random.seed(0) 
        N = 1000
        xperm = np.random.permutation(np.arange(ref_pcs.shape[0]))[:N] 
        ref_pcs = ref_pcs[xperm]
        m_pcs = m_pcs[xperm] 
        s_pcs = s_pcs[xperm]
        ref_name = '/workspace/data_cache_local/test_data/ref_ns_val_%s.pt'%cats 
        print('save as: ', ref_name)
        torch.save({'ref': ref_pcs, 'mean': m_pcs, 'std': s_pcs, 'obj_id': uid}, ref_name)
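As a quick sanity check after running the script, the saved `.pt` file can be loaded back and its keys inspected. A minimal sketch (the path and tensor sizes below are illustrative, not the script's actual output):

```python
import torch

# Build a payload with the same keys the script above saves, write it out,
# and load it back to confirm the round trip. The path is illustrative.
payload = {
    'ref': torch.zeros(2, 2048, 3),
    'mean': torch.zeros(2, 1, 3),
    'std': torch.ones(2, 1, 1),
    'obj_id': ['id0', 'id1'],
}
torch.save(payload, '/tmp/ref_ns_val_demo.pt')

loaded = torch.load('/tmp/ref_ns_val_demo.pt')
print(sorted(loaded.keys()))  # -> ['mean', 'obj_id', 'ref', 'std']
```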
watashihageteru commented 7 months ago

Thank you very much for providing your script. When I ran it in combination with "LION/datasets/dataset.py", I encountered errors in the following three places.

① Location of the error: `normalize_global=True)`
Error message: `TypeError: __init__() got an unexpected keyword argument 'normalize_global'`
⇒ I was able to run it without errors after removing `normalize_global`, but is that okay?

② Location of the error: `ref.append(d['tr_points'])`
Error message: `KeyError: 'tr_points'`
⇒ When I checked the contents of `d`, I found a similarly named key, `train_points`. Is it okay to replace `tr_points` with `train_points`?

③ Location of the error: `uid.append(d['oid'])`
Error message: `KeyError: 'oid'`
⇒ When I checked the contents of `datasets/test_data/ref_val_airplane.pt`, I found that only three variables, `ref`, `mean`, and `std`, were registered in the dictionary. So is it okay to comment out this line and ignore it?

ZENGXH commented 6 months ago

For 2 and 3: yes, your changes make sense. For 1: the argument for the `__init__` function is originally defined here: https://github.com/nv-tlabs/LION/blob/main/datasets/pointflow_datasets.py#L100C18-L100C34. If your dataset does not have this argument, it's OK to remove it.
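Putting the three fixes together, the data-collection part of the script would look roughly like this. A minimal self-contained sketch: the dummy `dataset` below stands in for `ShapeNet15kPointClouds` and is assumed to yield dicts with `train_points`, `mean`, and `std` keys (and no `oid`):

```python
import numpy as np

# Stand-in for ShapeNet15kPointClouds: assumed to yield dicts with
# 'train_points', 'mean', and 'std' (and no 'oid' key, per fix 3).
dataset = [
    {'train_points': np.zeros((2048, 3), dtype=np.float32),
     'mean': np.zeros((1, 3), dtype=np.float32),
     'std': np.ones((1, 1), dtype=np.float32)}
    for _ in range(4)
]

ref, m, s = [], [], []              # no 'uid' list (fix 3)
for d in dataset:
    ref.append(d['train_points'])   # 'tr_points' -> 'train_points' (fix 2)
    m.append(d['mean'])
    s.append(d['std'])

ref_pcs = np.stack(ref)  # (N, 2048, 3)
m_pcs = np.stack(m)      # (N, 1, 3)
s_pcs = np.stack(s)      # (N, 1, 1)
print(ref_pcs.shape)
```

The `normalize_global` keyword (fix 1) is simply omitted from the dataset constructor when the dataset class does not define it.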

noahcao commented 5 months ago

@watashihageteru Can you provide more details about how you trained the model on other datasets? I tried to fine-tune it but ran into some issues: https://github.com/nv-tlabs/LION/issues/68