Closed · orperel closed this issue 1 year ago
Running the latent_nerf demo with the provided config file does not seem to work. The parsing error:
Invalid configuration choice for "grid", expected to be any of the following:
(1) OctreeGrid
(2) HashGrid.from-geometric
If using a YAML, set a "grid.constructor" entry to one of the choices above.
If using the CLI, add a subcommand "grid:[CHOICE]" to one of the choices above.
while the .yaml is:
# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION & AFFILIATES and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION & AFFILIATES is strictly prohibited.
device: 'cuda'                  # Device used to run the optimization
log_level: 20                   # Log level: logging.INFO

blas:   # Occupancy structure (blas: bottom level acceleration structure, for fast ray tracing)
    constructor: 'OctreeAS.make_dense'
    level: 8

grid:   # Feature grid, the backbone of the neural field
    constructor: 'OctreeGrid'
    feature_dim: 5
    num_lods: 4
    interpolation_type: 'linear'
    multiscale_type: 'sum'
    feature_std: 0.01
    feature_bias: 0.0

nef:    # Neural field: combines grid, decoders and positional embedders
    constructor: 'FunnyNeuralField'

tracer: # A tracer for intersecting rays with the blas, generating samples along rays for the neural field & integrating
    constructor: 'PackedRFTracer'
    bg_color: [ 0.0, 0.0, 0.0 ]
    num_steps: 16
    raymarch_type: 'voxel'
    step_size: 1.0

dataset:    # Train & validation dataset
    constructor: 'NeRFSyntheticDataset'
    dataset_path: 'data/'
    split: 'train'              # Which subset of the dataset to use for training
    bg_color: [ 0.0, 0.0, 0.0 ] # The background color to use when alpha=0.
    mip: 0                      # If provided, will rescale images by 2**mip. Useful when large images are loaded.
    dataset_num_workers: -1     # The number of workers to spawn for multiprocessed loading.
                                # If dataset_num_workers < 1, processing will take place in the main process.

dataset_transform:
    constructor: 'SampleRays'
    num_samples: 4096

trainer:
    # Base Trainer config
    exp_name: "siggraph_2022_demo"  # Name of the experiment: a unique id to use for logging, model names, etc.
    mode: 'train'               # Choices: 'train', 'validate'
    max_epochs: 100             # Number of epochs to run the training.
    save_every: 100             # Saves the optimized model every N epochs
    save_as_new: False          # If True, will save the model as a new file every time the model is saved
    model_format: 'full'        # Format to save the model: 'full' (weights+model) or 'state_dict'
    render_every: 100           # Renders an image of the neural field every N epochs
    valid_every: 100            # Runs validation every N epochs
    enable_amp: True            # If enabled, the step() training function will use mixed precision.
    profile_nvtx: True          # If enabled, nvtx markers will be emitted by torch for profiling.
    grid_lr_weight: 100.0       # Learning rate weighting applied only to the grid parameters (those with "grid" in their name)
    # MultiviewTrainer config
    prune_every: 100            # Invokes nef.prune() logic every "prune_every" iterations.
    random_lod: False           # If True, random LODs in the occupancy structure will be picked per step, during tracing.
                                # If False, will always use the highest level of the occupancy structure.
    rgb_lambda: 1.0             # Loss weight for the rgb_loss component.
    dataloader:
        batch_size: 1           # Number of images used per batch, for which dataset_transform.num_samples rays will be used
        num_workers: 0          # Number of cpu workers used to fetch data, always 0 here as data is already processed.
    optimizer:                  # torch optimizer to use, and its args
        constructor: 'RMSprop'
        lr: 0.001
        eps: 1.0e-8
        weight_decay: 0         # Applies to decoder params only

tracker:
    log_dir: '_results/logs/runs'   # For logs and snapshots
    enable_tensorboard: True
    enable_wandb: False
    tensorboard:
        log_dir: '_results/logs/runs'   # For TensorBoard summary
        exp_name: null          # Only set this if you want to set an experiment set name specifically for TB
        log_fname: null         # Only set this if you want to set a unique ID specifically for TB
    wandb:                      # Active when enable_wandb=True
        project: 'wisp-demo'
        entity: null            # i.e. your wandb username here
        job_type: 'train'
    visualizer:                 # Offline renderer used during validation to save snapshots of the neural field
        render_res: [ 1024, 1024 ]
        render_batch: 10000
    vis_camera:                 # See: wisp.trainers.tracker.tracker.ConfigVisCameras
        camera_origin: [ -3.0, 0.65, -3.0 ]
        camera_lookat: [ 0.0, 0.0, 0.0 ]
        camera_fov: 30
        camera_clamp: [ 0.0, 10.0 ]
        viz360_num_angles: 20   # Set to 0 to disable the 360 animation.
        viz360_radius: 3.0
        viz360_render_all_lods: False
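For reference, the subcommand syntax the error asks for looks like this on the CLI (here `demo_app.py` is just a placeholder for the demo's actual entry point):

```bash
# Selects the OctreeGrid choice for the "grid" component via a CLI subcommand
python demo_app.py grid:OctreeGrid
```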
Never mind, it was probably the tyro version. Sorry!
The current wisp apps rely on `argparse` for configuration, which introduces a number of problems, e.g. args passed around as `extra_args` which are not properly tracked. We've drafted a new config system instead, relying on a combination of tyro and hydra-zen.
The proposed config system should have minimal impact on the way library components are accessed (that is, signatures of existing functions should remain the same, except for required cleanups and added typing annotations). Trainers are expected to be heavily revised by this change.
An example usage of the various methods is as follows:
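A minimal sketch of the pattern, assuming only tyro's documented `tyro.cli` entry point; the names below (`DemoConfig`, `OptimizerConfig`, `main`, `demo_app.py`) are placeholders rather than actual wisp code:

```python
# demo_app.py -- a typed, dataclass-based config parsed by tyro.
# All names here are placeholders, not actual wisp code.
from dataclasses import dataclass, field

import tyro


@dataclass
class OptimizerConfig:
    constructor: str = 'RMSprop'   # Which torch optimizer to construct
    lr: float = 0.001              # Learning rate
    weight_decay: float = 0.0      # Applies to decoder params only


@dataclass
class DemoConfig:
    exp_name: str = 'siggraph_2022_demo'   # Unique id for logging, model names, etc.
    optimizer: OptimizerConfig = field(default_factory=OptimizerConfig)


def main(cfg: DemoConfig) -> None:
    # A real app would construct the pipeline and trainer from cfg here.
    print(cfg)


if __name__ == '__main__':
    main(tyro.cli(DemoConfig))   # Parses sys.argv into a typed DemoConfig
```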
Running the program above and specifying args from the CLI:
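With tyro, nested dataclass fields become dot-delimited flags, so an invocation of the sketch above might look like:

```bash
python demo_app.py --exp-name siggraph_2022_demo --optimizer.lr 0.001
```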
or with a config file:
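A matching config file for the sketch could look as follows; how the file is fed to the app (e.g. via a `--config` flag) is an assumption about the final design:

```yaml
# demo_config.yaml -- hypothetical file mirroring DemoConfig above
exp_name: 'siggraph_2022_demo'
optimizer:
  constructor: 'RMSprop'
  lr: 0.001
  weight_decay: 0.0
```

```bash
python demo_app.py --config demo_config.yaml   # the --config flag is assumed here
```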