AiYoWeiYL opened 2 months ago
```yaml
demo_interpolate:
  model_state:
    enable_sh: True
    log_query: False
  dataset:
    module: LoG.dataset.demo.InterpolatePath
    args:
      cameras: $PLYNAME
      scale: 4
      steps: 600
      subs:
        - 0001
        - 0040
        - 0080
        - 0180
```
The `subs` field lists the image names of the key cameras. You should give at least 4 images to interpolate a path.
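Conceptually, the path interpolation blends between consecutive key cameras over the requested number of `steps`. A minimal sketch of the idea in NumPy (positions only; this is an illustration, not the actual `InterpolatePath` implementation, which also handles rotations and intrinsics):

```python
import numpy as np

def interpolate_path(key_positions, steps):
    """Linearly interpolate camera positions between consecutive key cameras.

    Conceptual sketch only: divides `steps` evenly across the segments
    between key cameras (any remainder from the division is dropped).
    """
    key_positions = np.asarray(key_positions, dtype=float)
    n_seg = len(key_positions) - 1
    per_seg = steps // n_seg
    path = []
    for i in range(n_seg):
        # endpoint=False avoids duplicating each key camera at segment joins
        t = np.linspace(0.0, 1.0, per_seg, endpoint=False)[:, None]
        path.extend((1 - t) * key_positions[i] + t * key_positions[i + 1])
    return np.array(path)

path = interpolate_path([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], steps=600)
print(path.shape)  # → (600, 3)
```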
Thank you for your reply. I wrote four cameras in the configuration file
```yaml
demo_interpolate:
  model_state:
    enable_sh: True
    log_query: False
  dataset:
    module: LoG.dataset.demo.InterpolatePath
    args:
      cameras: $PLYNAME
      scale: 4
      steps: 600
      subs:
        - 0001
        - 0040
        - 0080
        - 0180
```
but when I print the content of `subs` in the code, I get the following result: `[1, 32, '0080', '0180']`. This leaves fewer than 4 usable camera entries in the code.
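For reference, this mix of integers and strings comes from YAML 1.1 implicit typing: a scalar with a leading zero and only octal digits is read as an octal integer, so `0001` becomes 1 and `0040` becomes 32, while `0080` and `0180` contain the digit 8, fail the octal pattern, and stay strings. A quick check (assuming the config is loaded with a YAML 1.1 parser such as PyYAML):

```python
import yaml  # PyYAML follows YAML 1.1 implicit typing

snippet = """
subs:
  - 0001
  - 0040
  - 0080
  - 0180
"""
# Leading-zero scalars with only digits 0-7 resolve as octal integers;
# the ones containing the digit 8 fall back to plain strings.
print(yaml.safe_load(snippet)["subs"])  # → [1, 32, '0080', '0180']
```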
So I deleted the `subs` node from the configuration file, which makes all cameras act as subs by default. A render result is produced, but it is not good. Is there a parameter I have set incorrectly? Below is the content of the configuration file train.yml:
```yaml
parents:
  - config/npu_data/dataset.yml
  - config/npu_data/level_of_gaussian.yml
  - config/npu_data/stage_8_4.yml
exp: output/npu_data/log
gpus: [0]
log_interval: 1000
save_interval: 10_000
max_steps: 750

RGB_RENDER_L1_SSIM:
  module: LoG.render.renderer.NaiveRendererAndLoss
  args:
    use_origin_render: False
    use_randback: True

train:
  dataset: $dataset
  render: $RGB_RENDER_L1_SSIM
  stages: $NAIVE_STAGE
  init:
    method: scale_min
    dataset_state:
      scale: 4
```
Hello, in YAML syntax you should wrap these values in `""` so that they are parsed as strings:
```yaml
demo_interpolate:
  model_state:
    enable_sh: True
    log_query: False
  dataset:
    module: LoG.dataset.demo.InterpolatePath
    args:
      cameras: $PLYNAME
      scale: 4
      steps: 600
      subs:
        - "0001"
        - "0040"
        - "0080"
        - "0180"
```
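With the quotes in place, a YAML 1.1 parser such as PyYAML keeps all four entries as strings; a quick check:

```python
import yaml  # PyYAML

quoted = """
subs:
  - "0001"
  - "0040"
  - "0080"
  - "0180"
"""
# Quoted scalars are never subject to implicit int/octal resolution.
print(yaml.safe_load(quoted)["subs"])  # → ['0001', '0040', '0080', '0180']
```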
Thank you for your reply. Now I can get the render result, but the render result is not very good, as shown in the image below. What might be the cause of this? Your advice would be appreciated. Thanks!
Thank you for the update. The render result does look unusual. Typically, our method should achieve a better fit at least in the training view. Please make sure that your camera calibration is accurate, as it significantly impacts the results.
Thank you for your reply.
I think the intri.yml obtained from COLMAP calibration is OK.
Is it possible that the number of points in the sparse point cloud produced by COLMAP is too small, resulting in poor rendering results? I used 180 images at 1920*1080 resolution for COLMAP and obtained about 60,000 sparse points.
Or is it possible that the `use_origin_render` parameter causes the problem? I set this parameter to False; otherwise an error is reported. The error basically says that an unknown parameter `use_filter` is passed, which comes from the following code:
```python
if not self.use_origin_render and not model.training:
    name_args['use_filter'] = False
ret = rasterizer(**name_args)
```
Hello, you must set `use_origin_render` to False. You can comment out the following lines to make it work:

```python
# if not self.use_origin_render and not model.training:
#     name_args['use_filter'] = False
```
The proper way is to re-install `mydiffgaussian`:

```bash
cd submodules
# clone the modified gs
git clone https://github.com/chingswy/diff-gaussian-rasterization.git mydiffgaussian --recursive
cd mydiffgaussian
git checkout antialias
# or just pull it from github
git pull origin antialias
pip install . -v
cd ..
```
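If re-installing is not an option, a generic defensive pattern in Python (not part of the LoG codebase; the `rasterizer` below is a stand-in) is to drop keyword arguments that the installed rasterizer does not accept before calling it:

```python
import inspect

def call_with_supported_kwargs(fn, **kwargs):
    """Call fn, silently dropping keyword arguments it does not accept."""
    params = inspect.signature(fn).parameters
    # If fn accepts **kwargs, pass everything through unchanged.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return fn(**kwargs)
    supported = {k: v for k, v in kwargs.items() if k in params}
    return fn(**supported)

def rasterizer(means2d, use_filter=False):  # stand-in for the real rasterizer
    return ("rasterized", use_filter)

# unknown_flag is not in rasterizer's signature, so it is filtered out
print(call_with_supported_kwargs(rasterizer, means2d=None,
                                 use_filter=True, unknown_flag=1))
# → ('rasterized', True)
```

This keeps the caller working across rasterizer versions with different signatures, at the cost of hiding genuine typos in argument names.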
Hello, I re-cloned and installed the mydiffgaussian module as you said, and set use_origin_render to False, but the rendering result did not improve. So I tried to reproduce the rendering results on a small dataset you provided, to check whether something was wrong with my steps.
The small dataset you provided includes two resolutions, 768*512 and 1536*1024. I tested both resolutions separately and got different results, as shown in the following picture.
Does this mean that the more points in the initial point cloud (and the higher the image resolution), the better the rendering? The 1536*1024 set of images produced an initial point cloud of about 150,000 points with COLMAP.
Both sets of data were rendered using the same steps and configuration files, except for the image path. Here are all the configuration file contents.
1. train.yml

```yaml
parents:
  - config/test2/dataset.yml
  - config/test2/level_of_gaussian.yml
  - config/test2/stage_8_4.yml
exp: output/example/test2/log
gpus: [0]
log_interval: 1000
save_interval: 10_000
max_steps: 750

RGB_RENDER_L1_SSIM:
  module: LoG.render.renderer.NaiveRendererAndLoss
  args:
    use_origin_render: False
    use_randback: True

train:
  dataset: $dataset
  render: $RGB_RENDER_L1_SSIM
  stages: $NAIVE_STAGE
  init:
    method: scale_min
    dataset_state:
      scale: 4

val:
  dataset: $val_dataset
  iteration: 10000
```
2. stage_8_4.yml

```yaml
num_workers: &num_workers 4

NAIVE_STAGE:
  init:
    loader:
      module: iteration
      args:
        batch_size: 1
        iterations: 150
        num_workers: *num_workers
    dataset_state:
      scale: 8
    model_state: {}
  tree:
    loader:
      module: iteration
      args:
        batch_size: 1
        iterations: 550
        num_workers: *num_workers
    dataset_state:
      scale: 4
    render_state:
      render_depth: False
    model_state:
      enable_sh: True
```
3. level_of_gaussian.yml

```yaml
max_steps: 600

model:
  module: LoG.model.level_of_gaussian.LoG
  args:
    use_view_correction: True
    gaussian:
      xyz_scale: $xyz_scale
      sh_degree: 1
      init_ply:
        scale3d: $scale3d
        filename: $PLYNAME
      init_opacity: 0.1
    optimizer:
      optimize_keys: [xyz, colors, scaling, opacity, rotation, shs]
      opt_all_levels: True # optimize all levels or not
      lr_dict:
        xyz: 0.00016
        xyz_final: 0.0000016
        xyz_scale: $xyz_scale
        colors: 0.0025
        shs: 0.000125
        scaling: 0.005
        opacity: 0.05 # lr_opacity > lr_scaling
        rotation: 0.001
      max_steps: $max_steps # 30_000
    tree:
      max_child: 4
      max_level: 30
    densify_and_remove:
      upgrade_sh_iter: 10
      densify_from_iter: 1
      densify_every_iter: 1
      upgrade_repeat: 50
      # init
      init_split_method: split_by_2d
      init_radius_min: 4
      init_radius_split: 16
      init_weight_min: 0.1
      min_steps: 50
      # densify
      method: naive
      split_grad_thres: 0.0002
      radius2d_thres: 6
      remove_weights_thres: 0.005
      max_split_points: 20000
      sort_method: radii
      min_steps_split: 100
      #
      scaling_decay: 0.9
```
4. dataset.yml

```yaml
root: data/test2
PLYNAME: data/test2/sparse/0/sparse.npz
scale3d: 1.
xyz_scale: 1.

dataset:
  module: LoG.dataset.colmap.ImageDataset
  args:
    root: $root
    pre_undis: True
    share_camera: False
    scales: [1, 2, 4, 8]
    crop_size: [-1, -1]
    znear: 0.001
    zfar: 100.
    scale3d: $scale3d
    ext: .JPG

val_dataset:
  module: LoG.dataset.colmap.ImageDataset
  args:
    root: $root
    namelist:

demo_interpolate:
  model_state:
    enable_sh: True
    log_query: False
  render_state:
    background: [1., 1., 1.]
  dataset:
    module: LoG.dataset.demo.InterpolatePath
    args:
      cameras: $PLYNAME
      scale: 2
      steps: 300
      subs:

demo_level:
  model_state:
    enable_sh: True
    log_query: False
  dataset:
    module: LoG.dataset.demo.ShowLevel
    args:
      cameras: $PLYNAME
      steps: 10
      sub: y/8y01073
      scale: 4

demo_pixel:
  model_state:
    enable_sh: True
    log_query: True
  dataset:
    module: LoG.dataset.demo.ShowLevel
    args:
      mode: pixel
      cameras: $PLYNAME
      steps: 300
      sub: y/8y01073
      scale: 4

demo_lod:
  model_state:
    enable_sh: True
    log_query: False
  dataset:
    module: LoG.dataset.demo.ZoomInOut
    args:
      cameras: $PLYNAME
      sub: y/8y01073
      zranges: [-20., 1.]
      scale: 2
      use_logspace: False
```
Hello, the default configuration uses a training resolution that is downsampled by a scale of 8 or 4, because the input images we provide from drone captures are high resolution. If your original image resolution is not high, you should modify the scale in the corresponding `dataset.yml` and `stage_8_4.yml`. For example, change the init stage to use scale 2 and the tree stage to use scale 1.
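As a sketch of that edit (assuming the key layout shown in the configs earlier in this thread), the relevant fragments would become:

```yaml
# stage_8_4.yml: train the init stage at scale 2 and the tree stage at scale 1
NAIVE_STAGE:
  init:
    dataset_state:
      scale: 2
  tree:
    dataset_state:
      scale: 1

# dataset.yml: make sure the finer scales are among those prepared
dataset:
  args:
    scales: [1, 2, 4]
```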
Thank you for your reply. The result looks much better after modifying the scale!
Hello,
I've got the training results on my own data, but when I run the interpolation visualization I hit the following problem.
The configuration for the interpolation visualization should be this part of dataset.yml.
I don't understand what the `sub` and `subs` nodes mean. Your advice would be appreciated. Thanks!