princeton-vl / RAFT-Stereo


About finetuning processes. #73

Open · wwsource opened 1 year ago

wwsource commented 1 year ago

Nice work! By the way, the paper does not seem to describe the batch size used when finetuning on KITTI and ETH3D, and it also does not specify the learning rates for the KITTI, ETH3D, and Middlebury finetuning stages. Could you provide these details, or the corresponding training commands or scripts? Thank you very much.
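
For context, this is the kind of command I have been experimenting with. It is only a sketch: the flag names follow my reading of train_stereo.py's argument parser, and the dataset keyword, checkpoint path, batch size, learning rate, and step count are my own guesses rather than values from the paper.

```bash
# Hypothetical KITTI finetuning run -- every hyperparameter below is a guess,
# not a value taken from the paper or confirmed by the authors.
python train_stereo.py \
    --name raft-stereo-kitti-ft \
    --restore_ckpt models/raftstereo-sceneflow.pth \
    --train_datasets kitti \
    --batch_size 4 \
    --lr 1e-4 \
    --num_steps 5000 \
    --mixed_precision
```

If you could confirm or correct the batch size, learning rate, and number of steps for each of KITTI, ETH3D, and Middlebury, that would already answer most of my question.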