WU-CVGL / BAD-Gaussians

[ECCV 2024] "BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting". ⚡Train a scene from real-world blurry images in minutes!
https://lingzhezhao.github.io/BAD-Gaussians/
Apache License 2.0

Pose visualization problem. #23

Open chenkang455 opened 1 month ago

chenkang455 commented 1 month ago

Hello @LingzheZhao, I have a question about pose visualization in BAD-GS. I'm curious about the origin of the ground-truth (GT) poses. If they are derived from Blender, specifically via the blender-nerf add-on, could you explain how you reconciled the scale difference between the GT poses and the COLMAP poses?

[attached image]

I've attached a visual comparison, which suggests that the GT poses don't align with the COLMAP poses. Could you shed some light on this discrepancy?

[attached image: GT vs. COLMAP pose comparison]
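
For reference: COLMAP reconstructions are only defined up to an arbitrary global scale, so the GT trajectory (in Blender units) and the COLMAP trajectory can at best agree up to a similarity transform. A common fix is a Sim(3) (Umeyama) alignment before comparison; trajectory tools such as evo do this with --align --correct_scale. A minimal sketch, assuming est and gt are (N, 3) numpy arrays of corresponding camera positions:

import numpy as np

def umeyama_alignment(src, dst):
    # Least-squares similarity transform: dst ~= s * R @ src + t
    # (Umeyama 1991), for (N, 3) arrays of corresponding points.
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0  # correct for reflection
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Usage: align the COLMAP estimate to the GT before plotting
# s, R, t = umeyama_alignment(est, gt)
# est_aligned = s * est @ R.T + t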

LingzheZhao commented 1 month ago

Hi @chenkang455, I haven't used the blender-nerf add-on; instead, we used a few small scripts to extract the GT poses from Blender.

Here is a small script that exports camera poses from Blender to a TUM-formatted trajectory file; it should be run in Blender's built-in script editor:

import bpy
from pathlib import Path

# Edit here to change the parent of the output directory
WORK_DIR = Path("/home/YOURNAME/WORKDIR")
# Edit here to change the output directory
SEQUENCE = "tanabata"
# Edit here to change the target camera.
# Note that Trolley (i.e. wine) is rendered with `Camera.001` in the `tanabata.blend` scene.
CAMERA = "Camera"

OUTPUT_DIR = WORK_DIR / SEQUENCE
OUTPUT_FILE = OUTPUT_DIR / "groundtruth_full.txt"
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

scene = bpy.context.scene
cam = scene.objects.get(CAMERA, None)
assert cam is not None, f"Camera '{CAMERA}' not found in the scene"

# Format one pose as a TUM line: timestamp tx ty tz qx qy qz qw
def stringify_tum_pose(i, translation, quaternion):
    t = translation
    q = quaternion
    return f"{i} {t.x} {t.y} {t.z} {q.x} {q.y} {q.z} {q.w}\n"

with open(OUTPUT_FILE, "w") as f:
    f.write("#timestamp tx ty tz qx qy qz qw\n")
    # Assumes the animation starts at frame 0 (frames 0 .. frame_end - 1)
    for framei in range(scene.frame_end):
        scene.frame_set(framei)
        f.write(stringify_tum_pose(
            framei,
            cam.matrix_world.translation,
            cam.matrix_world.to_quaternion(),
        ))
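
The same script can also be run headlessly from the command line, e.g. blender tanabata.blend --background --python extract_poses.py (assuming the script above has been saved as extract_poses.py; --background and --python are standard Blender CLI flags).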

Then we drop the poses of the novel-view images with another simple script, as we only evaluate the poses of the training images:

import io
import pandas as pd

def read_and_filter_tum_file(file_path):
    # Read the file, skipping comment lines
    with open(file_path, 'r') as file:
        lines = file.readlines()

    # Filter out comment lines
    data_lines = [line for line in lines if not line.startswith('#')]

    # Load the data into a DataFrame
    columns = ['timestamp', 'tx', 'ty', 'tz', 'qx', 'qy', 'qz', 'qw']
    data = pd.read_csv(io.StringIO(''.join(data_lines)), sep=' ', names=columns)

    # Drop every 8th pose (0, 8, 16, ...), i.e. the novel-view frames.
    # The .copy() avoids pandas' SettingWithCopyWarning on the assignment below.
    filtered_data = data[data['timestamp'] % 8 != 0].copy()

    # Reassign timestamps starting from 0
    filtered_data['timestamp'] = range(len(filtered_data))

    return filtered_data

# Usage
file_path = 'groundtruth_full.txt'
filtered_trajectory = read_and_filter_tum_file(file_path)

# Display or save the filtered data
# print(filtered_trajectory)
filtered_trajectory.to_csv('groundtruth.txt', sep=' ', index=False, header=False)
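
As a quick sanity check, one can plot the trajectory that was just written; a minimal sketch, assuming matplotlib is installed and groundtruth.txt is in the working directory:

import matplotlib.pyplot as plt
import pandas as pd

cols = ['timestamp', 'tx', 'ty', 'tz', 'qx', 'qy', 'qz', 'qw']
traj = pd.read_csv('groundtruth.txt', sep=' ', names=cols)

# Top-down (x-y) view of the camera path
plt.plot(traj['tx'], traj['ty'], marker='.')
plt.axis('equal')
plt.xlabel('tx')
plt.ylabel('ty')
plt.show()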

Sorry for not maintaining this repo for a while, as I was too busy with other projects. I plan to reorganize the project and elaborate on these details in a few days.

Some other scripts we use in Blender are in this repo, but it depends on a private repo, so I will refactor it later to remove that dependency.

chenkang455 commented 1 month ago

Hi @LingzheZhao, thanks for your help! The instructions are detailed and I will try them 😊