Closed by pupiljia 1 month ago
We used OpenLRM 1.0 instead of 1.1. You can check out their v1.0 branch, where you will find the same model structure: https://github.com/3DTopia/OpenLRM/tree/v1.0.0
Thank you. But there is still a small difference in the transformer model's norm layer: OpenLRM uses ModLN, which has an additional MLP. @bluestyle97
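For context, here is a minimal sketch of what such a modulated LayerNorm looks like. The class name matches OpenLRM's `ModLN`, but the dimensions and the exact MLP layout here are illustrative assumptions, not OpenLRM's exact code:

```python
import torch
import torch.nn as nn

class ModLN(nn.Module):
    """Sketch of a modulated LayerNorm: a small MLP maps a condition
    vector (e.g. a camera embedding) to a per-channel scale and shift
    applied after the plain LayerNorm."""
    def __init__(self, inner_dim: int, mod_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(inner_dim)
        # The extra MLP mentioned above: condition -> (shift, scale)
        self.mlp = nn.Sequential(nn.SiLU(), nn.Linear(mod_dim, inner_dim * 2))

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        shift, scale = self.mlp(cond).chunk(2, dim=-1)
        return self.norm(x) * (1 + scale) + shift

x = torch.randn(2, 10, 64)    # (batch, tokens, inner_dim)
cond = torch.randn(2, 1, 16)  # (batch, 1, mod_dim), broadcast over tokens
out = ModLN(64, 16)(x, cond)
print(out.shape)  # torch.Size([2, 10, 64])
```

A plain `nn.LayerNorm` has only a learned weight and bias, so the extra `mlp` parameters are exactly the structural mismatch you would see when comparing the two checkpoints.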
I notice that Instantnerf reuses the pre-trained OpenLRM. But when I checked the OpenLRM models, I found that there are some differences in model structure. This one is the closest to the Instantnerf config, but the resolution is different and it uses DINOv2 as the encoder model. I want to know: did you just load the weights from this GitHub repo and model? Do the differences cause any bad effects?
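One way to see exactly which parameters differ when reusing a checkpoint with a slightly different structure is to load it with `strict=False`, which reports mismatched keys instead of raising. A small self-contained sketch (the `TinyBlock` model and checkpoint here are hypothetical, not the actual OpenLRM weights):

```python
import torch
import torch.nn as nn

class TinyBlock(nn.Module):
    """Toy stand-in for a transformer block whose structure differs
    from the checkpoint (it has an extra modulation MLP)."""
    def __init__(self):
        super().__init__()
        self.norm = nn.LayerNorm(8)
        self.mod_mlp = nn.Linear(4, 16)  # extra layer absent from the checkpoint

block = TinyBlock()
# Hypothetical checkpoint containing only the plain norm parameters.
ckpt = {"norm.weight": torch.ones(8), "norm.bias": torch.zeros(8)}

# strict=False loads what matches and reports the rest.
result = block.load_state_dict(ckpt, strict=False)
print(result.missing_keys)     # ['mod_mlp.weight', 'mod_mlp.bias']
print(result.unexpected_keys)  # []
```

Running this kind of check on the two real checkpoints would show concretely which layers (e.g. the ModLN MLP) have no pre-trained weights to load.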