drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"
MIT License

custom dataset problem #33

Closed hansoogithub closed 1 year ago

hansoogithub commented 1 year ago

Sorry, I am getting the errors below when running train.py on a modified KITTI-360 dataset. I made it by simply editing the manifest of where the train, val, and test files point, and renaming kitti-360 to another instance name in the configuration files. How do I solve this? Thank you.

Traceback (most recent call last):
  File "/superpoint_transformer/src/utils/utils.py", line 45, in wrap
    metric_dict, object_dict = task_func(cfg=cfg)
  File "src/train.py", line 114, in train
    trainer.fit(model=model, datamodule=datamodule, ckpt_path=cfg.get("ckpt_path"))
  File "/root/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 532, in fit
    call._call_and_handle_interrupt(
  File "/root/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 43, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/root/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 571, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/root/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 980, in _run
    results = self._run_stage()
  File "/root/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1023, in _run_stage
    self.fit_loop.run()
  File "/root/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 198, in run
    self.on_run_start()
  File "/root/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 316, in on_run_start
    call._call_callback_hooks(trainer, "on_train_start")
  File "/root/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 195, in _call_callback_hooks
    fn(trainer, trainer.lightning_module, *args, **kwargs)
  File "/root/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/callbacks/lr_monitor.py", line 107, in on_train_start
    raise MisconfigurationException(
lightning_fabric.utilities.exceptions.MisconfigurationException: Cannot use `LearningRateMonitor` callback with `Trainer` that has no logger.
drprojects commented 1 year ago

Hi, it is hard to tell from this. I am guessing your configs are not properly set up. The error message is telling you that your Trainer is missing a logger.
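For context, in lightning-hydra-template-style projects such as this one, this error usually means the callbacks config still registers `LearningRateMonitor` while the logger config was overridden to null. A hedged sketch of the two relevant config fragments (the file paths and keys below are assumptions based on the template, not copied from this repository):

```yaml
# configs/callbacks/default.yaml (assumed path)
# LearningRateMonitor logs LR values, so the Trainer must have a logger
lr_monitor:
  _target_: pytorch_lightning.callbacks.LearningRateMonitor
  logging_interval: epoch

# If the logger group was disabled (logger: null), either re-enable one,
# e.g. on the command line:
#   python src/train.py logger=tensorboard
# or remove the lr_monitor callback when training without a logger.
```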

I can provide support for the officially released code, but unfortunately do not have time to provide support for errors arising on modified versions of the code.

Before making modifications to the project, I invite you to get familiar with how the released code works, and in particular how the lightning-hydra template works.

Good luck

PS: if you are interested in this project, don't forget to give it a ⭐, it matters to us!