nie-lang / StabStitch

ECCV2024 - Eliminating Warping Shakes for Unsupervised Online Video Stitching
Apache License 2.0

About train.py code #3

Open 2000Zzw opened 2 months ago

2000Zzw commented 2 months ago

Hello, I want to train on my own datasets. It would be greatly appreciated if you could release the training code.

nie-lang commented 2 months ago

Hi, thanks for your interest. The related training code of StabStitch has not been organized. Actually, I'm working on the extension version of StabStitch with better alignment, fewer distortions, and higher stability. We plan to release the complete code of the extension (including training and testing codes) once this work is done, whether the extended paper is accepted or not.

2000Zzw commented 2 months ago

Thanks for your reply.

2000Zzw commented 2 months ago

> Hi, thanks for your interest. The related training code of StabStitch has not been organized. Actually, I'm working on the extension version of StabStitch with better alignment, fewer distortions, and higher stability. We plan to release the complete code of the extension (including training and testing codes) once this work is done, whether the extended paper is accepted or not.

I have some questions about the code. 1. When I run 'test_online.py', I found that the output of '_transform' in 'torch_tps_transform.py' is an all-zero matrix, so I normalized 'x_s' and 'y_s'. I'm not sure if this is the correct operation.


2. I found it takes a long time to generate one stitched video (I ran it overnight, but no stitched video was generated in 'result').
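For reference, here is a minimal sketch of the normalization I mean, assuming the warped coordinates end up in PyTorch's `grid_sample` (the helper name and everything beyond 'x_s'/'y_s' are mine, not from the repo):

```python
import torch
import torch.nn.functional as F

def normalize_coords(x_s, y_s, h, w):
    # Map pixel coordinates [0, w-1] x [0, h-1] to [-1, 1],
    # the range that F.grid_sample expects (with align_corners=True).
    x_n = x_s / ((w - 1) / 2.0) - 1.0
    y_n = y_s / ((h - 1) / 2.0) - 1.0
    return x_n, y_n

# Identity sampling as a sanity check: the warped image should equal the input.
h, w = 360, 640
x_s = torch.linspace(0, w - 1, w).repeat(h, 1)               # (h, w) x-coords
y_s = torch.linspace(0, h - 1, h).unsqueeze(1).repeat(1, w)  # (h, w) y-coords
x_n, y_n = normalize_coords(x_s, y_s, h, w)
grid = torch.stack([x_n, y_n], dim=-1).unsqueeze(0)          # (1, h, w, 2)
img = torch.rand(1, 3, h, w)
warped = F.grid_sample(img, grid, align_corners=True)
```

Without this mapping to [-1, 1], out-of-range coordinates get zero-padded, which would explain an all-zero output.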

nie-lang commented 2 months ago

> Hi, thanks for your interest. The related training code of StabStitch has not been organized. Actually, I'm working on the extension version of StabStitch with better alignment, fewer distortions, and higher stability. We plan to release the complete code of the extension (including training and testing codes) once this work is done, whether the extended paper is accepted or not.
>
> I have some questions about the code. 1. When I run 'test_online.py', I found that the output of '_transform' in 'torch_tps_transform.py' is an all-zero matrix, so I normalized 'x_s' and 'y_s'. I'm not sure if this is the correct operation.
>
> 2. I found it takes a long time to generate one stitched video (I ran it overnight, but no stitched video was generated in 'result').

Please make sure the input video frames and models are correctly loaded. I tested it on a 4090 GPU, and it runs in real time.

2000Zzw commented 2 months ago

Thanks! It works now!

liujiaocv commented 1 month ago

I am extremely grateful for your outstanding work. May I inquire about the specific date for the release of the training code?

nie-lang commented 1 month ago

> I am extremely grateful for your outstanding work. May I inquire about the specific date for the release of the training code?

If all things go well, we will release the complete code for the extended version in October.

yfclark commented 1 month ago

@nie-lang The open-source project is excellent, and I am currently testing it. There are some areas where the code could be improved:

(1) Directly constructing tensors on CUDA can be slightly faster than first constructing them on the CPU and then moving them to the GPU.

(2) The test_online logic still has some issues. The spatial warp and temporal warp should be processed in a rolling manner, rather than processing the entire video in one batch and then passing it to the next step. Lastly, I look forward to seeing the code for the training part at an earlier stage.
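A minimal sketch of suggestion (1), assuming PyTorch (the variable names are mine):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Slower pattern: build the tensor on the CPU, then copy it to the GPU.
mesh_cpu = torch.zeros(1, 12, 12, 2).to(device)

# Faster pattern: allocate directly on the target device, no host-side
# allocation or host-to-device copy of the initial data.
mesh_gpu = torch.zeros(1, 12, 12, 2, device=device)

# The same applies to other factory calls, e.g. torch.arange / torch.linspace:
steps = torch.arange(0, 12, device=device)
```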

nie-lang commented 1 month ago

@yfclark

> @nie-lang The open-source project is excellent, and I am currently testing it. There are some areas where the code could be improved:
>
> (1) Directly constructing tensors on CUDA can be slightly faster than first constructing them on the CPU and then moving them to the GPU.
>
> (2) The test_online logic still has some issues. The spatial warp and temporal warp should be processed in a rolling manner, rather than processing the entire video in one batch and then passing it to the next step. Lastly, I look forward to seeing the code for the training part at an earlier stage.

First of all, thanks for your suggestions. As for the logic of "test_online": we first calculate all the spatial/temporal warps across the whole video to determine the final size of the stitched video, and then warp all frames at that size. We could indeed process the video in a rolling manner and pass each chunk to the next step, as you suggested. But in that case we would have to predefine the resolution of the stitched video, and to ensure no content is lost, the predefined resolution would have to be quite large, which would produce extensive invalid regions (black regions) and slow down the warping.
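For illustration, the first pass can be thought of as taking the union bounding box of all warped frame corners to size the canvas (a sketch; the function below is hypothetical, not from the repo):

```python
import torch

def stitched_canvas_size(corner_sets):
    # corner_sets: list of (4, 2) tensors of warped frame corners (x, y),
    # one per frame/view, all in the reference coordinate system.
    all_pts = torch.cat(corner_sets, dim=0)
    min_xy = all_pts.min(dim=0).values
    max_xy = all_pts.max(dim=0).values
    w = int(torch.ceil(max_xy[0] - min_xy[0]).item())
    h = int(torch.ceil(max_xy[1] - min_xy[1]).item())
    # min_xy is the offset needed to shift every warp into the canvas.
    return h, w, min_xy

# Usage: a 640x360 reference frame plus a warped frame shifted right/down.
ref = torch.tensor([[0., 0.], [640., 0.], [0., 360.], [640., 360.]])
warped = ref + torch.tensor([100., 50.])
h, w, offset = stitched_canvas_size([ref, warped])
```

Knowing (h, w) in advance keeps the canvas tight, which is exactly what a predefined (and therefore oversized) resolution cannot do.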

Finally, we will release the training code of the extended version (StabStitch++) within this October.

nie-lang commented 2 weeks ago

The complete code of StabStitch++ (an extension of StabStitch) has been released, including code for training, inference, and multi-video stitching.