Closed: wrencanfly closed this issue 3 weeks ago
Since the resolution of SAM's GT feature map is very low, you can comment out this line to get a feature map at the same resolution as your rendered image: https://github.com/ShijieZhou-UCLA/feature-3dgs/blob/6c570d6e7e4375129e57a588a206e6f629be2ff1/render.py#L137
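For context on why that GT feature map is low-resolution: SAM's ViT image encoder produces a 256-channel, 64x64 spatial embedding for a 1024x1024 input, far smaller than a typical rendered image. A minimal NumPy sketch of the resolution gap (the 64x64/256-channel shape is SAM's encoder output; the nearest-neighbor upsampling here is purely for illustration, not necessarily what `render.py` does):

```python
import numpy as np

# Stand-in for a SAM image embedding: 256 channels at 64x64 spatial
# resolution (the output shape of SAM's ViT-H image encoder).
feat = np.random.rand(256, 64, 64).astype(np.float32)

# Example rendered-image resolution.
H, W = 512, 512

# Nearest-neighbor upsampling by integer factors (64 -> 512 is x8),
# done by repeating elements along each spatial axis.
up = feat.repeat(H // feat.shape[1], axis=1).repeat(W // feat.shape[2], axis=2)
print(up.shape)  # (256, 512, 512)
```

So without the resizing step linked above, the supervision signal is roughly 8x coarser per axis than a 512x512 render.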
I followed the instructions and tried it on the teatime dataset.
What I did:
Export SAM embeddings: `python export_image_embeddings.py --checkpoint checkpoints/sam_vit_h_4b8939.pth --model-type vit_h --input ../../data/DATASET_NAME/images --output ../../data/OUTPUT_NAME/sam_embeddings`
Train: `python train.py -s data/DATASET_NAME -m output/OUTPUT_NAME -f sam --speedup --iterations 7000`
Render: `python render.py -s data/DATASET_NAME -m output/OUTPUT_NAME --iteration 7000`
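For anyone reproducing this, the three steps can be wrapped in one script. `DATASET_NAME`/`OUTPUT_NAME` are the placeholders from the commands above (replace them with your own paths, and note the export step is meant to be run from inside the SAM encoder directory, hence the `../../` prefixes in the original command). This sketch uses `echo` as a dry run so you can check the assembled commands before launching:

```shell
#!/bin/sh
# Placeholder names taken from the commands above -- replace with your own.
DATASET=data/DATASET_NAME
OUTPUT=output/OUTPUT_NAME
ITERS=7000

# Dry run: print each command; remove 'echo' to actually execute.
echo python export_image_embeddings.py --checkpoint checkpoints/sam_vit_h_4b8939.pth \
    --model-type vit_h --input "$DATASET/images" --output "$DATASET/sam_embeddings"
echo python train.py -s "$DATASET" -m "$OUTPUT" -f sam --speedup --iterations "$ITERS"
echo python render.py -s "$DATASET" -m "$OUTPUT" --iteration "$ITERS"
```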