zoldaten opened this issue 6 months ago
Sorry, I forgot to make my diff-gaussian-rasterization public; knn should work though. You can try again.
OK, I git cloned it and built each module by hand with python setup.py install.
How do I run inference? Should I train a new model first?
If so, I can't start training:
python train.py -s C:\Users\{user}\Desktop\2\Deblur-GS\exblur_release\bench --eval
It asks me to open a port?! If I deny, it starts a new session and crashes:
```
train.py:437 <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations,
train.py:66 training
    scene = Scene(dataset, gaussians)
__init__.py:60 __init__
    assert False, "Could not recognize scene type!"
AssertionError: Could not recognize scene type!
```
P.S. I have also installed visdom, torchmetrics, and easydict, and downgraded protobuf to 3.20.*.
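For context on the assertion above: 3DGS-style codebases typically decide the scene type by probing the source folder for a COLMAP export or a Blender/NeRF-synthetic export. This is a minimal sketch of that check, assuming the standard dataset layout (the function name is illustrative, not the repo's actual code):

```python
import os

def detect_scene_type(source_path):
    # A COLMAP dataset has a "sparse" subfolder; a Blender/NeRF-synthetic
    # dataset has transforms_train.json. Anything else is rejected, which
    # produces the "Could not recognize scene type!" error above.
    if os.path.exists(os.path.join(source_path, "sparse")):
        return "colmap"
    if os.path.exists(os.path.join(source_path, "transforms_train.json")):
        return "blender"
    raise AssertionError("Could not recognize scene type!")
```

So if -s points at a folder containing neither marker (for example, one level too high or too deep in the extracted archive), training fails exactly like this.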
knn is not public though?
There was some issue with the dataset; I have replaced it with a new one.
The knn failure might be a path issue; you can update the repo and try again.
Training still fails:
```
train.py:437 <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations,
train.py:66 training
    scene = Scene(dataset, gaussians)
__init__.py:84 __init__
    midas = torch.hub.load(
hub.py:566 load
    model = _load_local(repo_or_dir, model, *args, **kwargs)
hub.py:592 _load_local
    hub_module = _import_module(MODULE_HUBCONF, hubconf_path)
hub.py:106 _import_module
    spec.loader.exec_module(module)
<frozen importlib._bootstrap_external>:879 exec_module
<frozen importlib._bootstrap_external>:1016 get_code
<frozen importlib._bootstrap_external>:1073 get_data
FileNotFoundError: [Errno 2] No such file or directory:
C:\home/cwb/.cache/torch/hub/intel-isl_MiDaS_master\hubconf.py
```
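The mixed path in that traceback (C:\home/cwb/...) suggests a Unix cache directory was hard-coded somewhere, which Windows resolves against the drive root instead of the actual user's home. A portable sketch, assuming the goal is to point torch.hub at the current user's cache (the resulting path can be passed to torch.hub.set_dir):

```python
import os

# Build the torch hub cache dir from the current user's home rather than
# hard-coding "/home/<someone>/.cache/...", which breaks on Windows.
hub_dir = os.path.join(os.path.expanduser("~"), ".cache", "torch", "hub")
```

Note also that torch.hub.load with source="local" never downloads anything; it expects the repo (including hubconf.py) to already be present in the hub directory.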
I had set the MiDaS model to load locally for efficiency. It should be fixed now.
OK, I trained a model with:
python train.py -s C:\Users\{user}\Desktop\2\Deblur-GS\exblur_release\bench --eval
and got events.out.tfevents.1715678047.ws5411.864.0 (about 1 GB) in the output folder.
Then I executed:
python render_video.py -m C:\Users\{user}\Desktop\2\Deblur-GS\output\4744193f-5
and got a novel_view folder with depth and rgb dirs,
but in the rgb dir I don't see any deblurred images.
Maybe I trained it wrong, since initially I had the bench dir just as downloaded,
but there were no error messages during training...
Have you run render.py? Not render_video.py but render.py?
Yes, I ran that before render_video.py:
python render.py -m <path to trained model> # Generate renderings
python metrics.py -m <path to trained model> # Compute error metrics on renderings
You should use the command-line parameter --deblur to enable deblur mode.
Perhaps I should set it to true by default.
I started training, but there seems to be no CUDA utilization ( it will take 18 hours to train.
Although in the output cfg_args the cuda device is listed:
Namespace(sh_degree=3, source_path='C:\\Users\\___\\Desktop\\2\\Deblur-GS\\exblur_release\\bench', model_path='./output/59ea1996-6', images='images', resolution=-1, white_background=False, data_device='cuda', bezier_order=7, mode='Linear', eval=True, visdom_server='127.0.0.1', visdom_port=2333)
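For what it's worth, cfg_args is just the repr of an argparse Namespace, so it can be parsed back to inspect what the run was configured with. A sketch, assuming the standard 3DGS convention where data_device only controls where the input images are stored, not whether training itself runs on the GPU (the helper name is illustrative):

```python
from argparse import Namespace

def read_cfg(path):
    # cfg_args is written as repr(Namespace(...)); evaluating it with
    # Namespace in scope recovers the recorded training options.
    with open(path) as f:
        return eval(f.read(), {"Namespace": Namespace})

# cfg = read_cfg("output/59ea1996-6/cfg_args")
# cfg.data_device  -> 'cuda'
```

So data_device='cuda' in cfg_args is not by itself proof the GPU is being used; checking torch.cuda.is_available() in the training environment is the more direct test.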
OK, got it!
In the end I joined the images from novel_view\rgb into a video:
ffmpeg -framerate 24 -i %05d.png Project.mp4
Looks good!
https://github.com/Chaphlagical/Deblur-GS/assets/29411817/f80f445e-6c4c-4a67-9001-43a6cc1faacc
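One caveat with that ffmpeg invocation: the %05d.png image-sequence pattern expects a gap-free, zero-padded numbering, and a missing frame makes ffmpeg stop early. A small stdlib sketch to sanity-check a frame folder before encoding (the function name is illustrative):

```python
import os
import re

def frames_are_contiguous(folder):
    # ffmpeg's %05d.png pattern reads frames in numeric order and expects
    # no gaps; verify the sequence is complete before encoding.
    names = sorted(f for f in os.listdir(folder)
                   if re.fullmatch(r"\d{5}\.png", f))
    nums = [int(n[:5]) for n in names]
    return bool(nums) and nums == list(range(nums[0], nums[0] + len(nums)))
```

If the sequence does not start at 00000, ffmpeg's -start_number option sets the first index.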
But could you explain how to make a custom dataset? Is it difficult?
Use COLMAP?
Hi! I don't use conda, so I tried:
git clone --recursive https://github.com/Chaphlagical/Deblur-GS.git
but it failed on the submodules diff-gaussian-rasterization and knn. Please help!
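If the recursive clone gets the main repository but fails partway through the submodules, they can usually be fetched afterwards from inside the checkout; a sketch, assuming the clone itself succeeded:

```shell
# Re-sync submodule URLs from .gitmodules and fetch them into place.
cd Deblur-GS
git submodule sync --recursive
git submodule update --init --recursive
```

If a submodule URL is private or wrong in .gitmodules, this step will report exactly which one fails.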