alex-xun-xu / LearnSubspaceMoSeg


No KT3DMoSeg dataset? #1

Status: Open · SJingjia opened this issue 2 years ago

SJingjia commented 2 years ago

Hi, the script “prepareDataset_KT3DMoSeg” is empty. Could you provide a download link for the dataset? Thank you!

dtcheng0829 commented 2 years ago

Hello, I also do research in this area. Were you able to run this code?

alex-xun-xu commented 2 years ago

Sorry for the late reply. I will try to update the data preprocessing part by this week.

dtcheng0829 commented 2 years ago

> Sorry for the late reply. I will try to update the data preprocessing part by this week.

Thank you for your reply. Regarding your code, I did run into some issues, such as the indexing when loading the data and the handling of the 5 frames. I would be grateful if Professor Xu could provide more detail.

dtcheng0829 commented 2 years ago

Hello, I wrote a version of your code with PyTorch, and the network output is NaN. After debugging, I found there is a problem inside the network: after I removed many of the normalization layers, it could run. I don't know whether you have encountered this problem.
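For anyone hitting the same NaN issue, here is a minimal PyTorch debugging sketch. It is not code from this repository; the model, loader, criterion, and optimizer below are dummy stand-ins, and anomaly detection plus gradient clipping are just two common ways to localize and tame NaNs.

```python
import torch
import torch.nn as nn

# Dummy stand-ins so the sketch runs end to end; swap in your real model and data.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(3)]

torch.autograd.set_detect_anomaly(True)   # backward() raises at the first op producing NaN/Inf

for points, labels in loader:
    logits = model(points)
    if torch.isnan(logits).any():         # catch NaNs already present in the forward pass
        raise RuntimeError("NaN in network output")
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    # exploding gradients are a common NaN source; clip them
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```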

alex-xun-xu commented 2 years ago

I find PointNet can also work well on this problem. Maybe you can try swapping in PointNet as the backbone; the only change needed is the input dimension.
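A minimal sketch of what such a swap could look like, assuming a generic PointNet-style per-point segmentation backbone (shared 1x1 convolutions plus a global max pool). This is not the repository's actual network; `in_dim` is the only parameter tied to the trajectory feature dimension.

```python
import torch
import torch.nn as nn

class PointNetSegBackbone(nn.Module):
    """PointNet-style backbone sketch: shared per-point MLP, global max pool,
    global feature concatenated back to every point for segmentation."""
    def __init__(self, in_dim, out_dim=128):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
        )
        self.global_mlp = nn.Sequential(
            nn.Conv1d(64, 256, 1), nn.BatchNorm1d(256), nn.ReLU(),
        )
        self.head = nn.Conv1d(64 + 256, out_dim, 1)

    def forward(self, x):                      # x: (B, in_dim, N)
        local = self.local(x)                  # (B, 64, N)
        g = self.global_mlp(local).max(dim=2, keepdim=True)[0]
        g = g.expand(-1, -1, x.size(2))        # broadcast global feature to every point
        return self.head(torch.cat([local, g], dim=1))   # (B, out_dim, N)

# e.g. 5 frames of (x, y) coordinates per trajectory -> in_dim = 10
net = PointNetSegBackbone(in_dim=10)
```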

dtcheng0829 commented 2 years ago

Thank you very much for your reply. I'll try that.

alex-xun-xu commented 2 years ago

@dtcheng0829 Please see the updated Readme. You can now access the processed clips.

dtcheng0829 commented 2 years ago

Thank you very much for taking the time to reply. I have a few questions. First, I ran some experiments with your code on AdelaideRMF and KT3DMoSeg and found that the performance is very poor; I am not sure whether you loaded a pre-trained model. Of course, I will also carefully check my code. Second, will the performance of such a multi-layer network suffer on such a small dataset? Finally, I'm very sorry to bother you.

alex-xun-xu commented 2 years ago

Can you try the newly released clips (in *.mat format)? Please let me know if the performance is still poor. The performance is not affected much by the depth of the network; you can just use the default parameters.

dtcheng0829 commented 2 years ago

OK, thank you very much for your comments. I will run experiments on the newly released KT3DMoSeg dataset and report the performance. My poor results are mainly on the AdelaideRMF dataset, and this problem has bothered me for a long time. Because the labels of AdelaideRMF differ from normal semantic segmentation, I applied your clustering method to the network output and ran it with several networks. When I perform leave-one-out cross-validation, the loss on several validation samples keeps increasing, and of course the performance is not very good.
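In case it helps with debugging the clustering stage, here is a generic stand-in (not necessarily the clustering used in this repository): spectral clustering on per-point embeddings, scored with the adjusted Rand index. `embeddings` and `gt_labels` below are random dummies.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
embeddings = rng.standard_normal((200, 16))   # stand-in for per-point network embeddings (N, D)
gt_labels = rng.integers(0, 2, 200)           # stand-in for ground-truth motion labels

def segment(emb, n_motions):
    # simple symmetric, non-negative affinity from embedding inner products
    affinity = np.abs(emb @ emb.T)
    return SpectralClustering(n_clusters=n_motions,
                              affinity="precomputed").fit_predict(affinity)

pred = segment(embeddings, n_motions=2)
print("ARI:", adjusted_rand_score(gt_labels, pred))
```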

dtcheng0829 commented 2 years ago

Hello, for the AdelaideRMF dataset, did you preprocess the data before feeding it into the network?

dtcheng0829 commented 2 years ago

Hello, how do you handle a varying number of points N as input to the network?

alex-xun-xu commented 2 years ago

> Hello, how do you handle a varying number of points N as input to the network?

This network does not restrict the number of input points. You can try different numbers of points.
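To illustrate the point with the hypothetical PointNet-style sketch from earlier in this thread: every layer is a shared 1x1 convolution and the only cross-point operation is a max pool, so the same weights accept any number of points N.

```python
import torch

net = PointNetSegBackbone(in_dim=10)   # defined in the earlier sketch
net.eval()                             # avoid batch-norm statistics issues with batch size 1
with torch.no_grad():
    for n_points in (137, 512, 2048):
        x = torch.randn(1, 10, n_points)
        print(net(x).shape)            # (1, 128, n_points) for every N
```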

dtcheng0829 commented 2 years ago

(image attached) I would like to know whether both your VANILLA and AUGMENT modes were run on the dataset you sent me last time. If so, does AUGMENT mode use all the .mat files, and does VANILLA mode use only the SeqXXX_ClipXX_r-1_v-5.mat files?
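A small loading sketch, in case it helps clarify which files each mode would touch. The directory name and the field inspection are assumptions; only the SeqXXX_ClipXX_r-1_v-5.mat naming pattern comes from this thread.

```python
import glob
from scipy.io import loadmat

# Assumed layout: all released clips sit in a flat KT3DMoSeg/ directory.
all_clips = sorted(glob.glob("KT3DMoSeg/*.mat"))                        # presumably everything AUGMENT would iterate over
vanilla_clips = sorted(glob.glob("KT3DMoSeg/Seq*_Clip*_r-1_v-5.mat"))   # the r-1_v-5 subset asked about above

print(len(all_clips), "clips total,", len(vanilla_clips), "matching the r-1_v-5 pattern")
for path in vanilla_clips[:3]:
    mat = loadmat(path)
    print(path, [k for k in mat if not k.startswith("__")])             # peek at the stored variables
```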

dtcheng0829 commented 2 years ago

Hi, for the FBMS59 dataset, how did you convert the RGB images into the point trajectories used as input?

dtcheng0829 commented 2 years ago

I'm very sorry to bother you again. First, thank you very much for the KT3DMoSeg dataset you provided last time; I have finished my experiments on it and the performance is very good. Second, I am now working on the FBMS dataset and have some questions. For FBMS, is the point-trajectory data (the input to the network) produced by the method from the paper “Segmentation of Moving Objects by Long Term Video Analysis”, which is implemented in C++? If so, is the pixel subsampling factor 4 or 8? Looking forward to your reply; I would be very grateful.

dtcheng0829 commented 2 years ago

Also, we will cite your articles.