remove stray "generic_model_args" references

Summary:
`generic_model_args` no longer exists. This updates the remaining references to it, mostly in documentation.

This fixes the testing of all the yaml files in the test_forward pass.

Reviewed By: shapovalov

Differential Revision: D38789202

fbshipit-source-id: f11417efe772d7f86368b3598aa66c52b1309dbf
Jeremy Reizenstein 2022-08-18 07:18:55 -07:00 committed by Facebook GitHub Bot
parent d42e0d3d86
commit fdaaa299a7
4 changed files with 44 additions and 28 deletions


@@ -67,7 +67,9 @@ To run training, pass a yaml config file, followed by a list of overridden argum
 For example, to train NeRF on the first skateboard sequence from CO3D dataset, you can run:
 ```shell
 dataset_args=data_source_args.dataset_map_provider_JsonIndexDatasetMapProvider_args
-pytorch3d_implicitron_runner --config-path ./configs/ --config-name repro_singleseq_nerf $dataset_args.dataset_root=<DATASET_ROOT> $dataset_args.category='skateboard' $dataset_args.test_restrict_sequence_id=0 test_when_finished=True exp_dir=<CHECKPOINT_DIR>
+pytorch3d_implicitron_runner --config-path ./configs/ --config-name repro_singleseq_nerf \
+$dataset_args.dataset_root=<DATASET_ROOT> $dataset_args.category='skateboard' \
+$dataset_args.test_restrict_sequence_id=0 test_when_finished=True exp_dir=<CHECKPOINT_DIR>
 ```
 Here, `--config-path` points to the config path relative to `pytorch3d_implicitron_runner` location;
@@ -86,7 +88,9 @@ To run evaluation on the latest checkpoint after (or during) training, simply ad
 E.g. for executing the evaluation on the NeRF skateboard sequence, you can run:
 ```shell
 dataset_args=data_source_args.dataset_map_provider_JsonIndexDatasetMapProvider_args
-pytorch3d_implicitron_runner --config-path ./configs/ --config-name repro_singleseq_nerf $dataset_args.dataset_root=<CO3D_DATASET_ROOT> $dataset_args.category='skateboard' $dataset_args.test_restrict_sequence_id=0 exp_dir=<CHECKPOINT_DIR> eval_only=True
+pytorch3d_implicitron_runner --config-path ./configs/ --config-name repro_singleseq_nerf \
+$dataset_args.dataset_root=<CO3D_DATASET_ROOT> $dataset_args.category='skateboard' \
+$dataset_args.test_restrict_sequence_id=0 exp_dir=<CHECKPOINT_DIR> eval_only=True
 ```
 Evaluation prints the metrics to `stdout` and dumps them to a json file in `exp_dir`.
@@ -101,7 +105,8 @@ conda install ffmpeg
 Here is an example of calling the script:
 ```shell
-projects/implicitron_trainer/visualize_reconstruction.py exp_dir=<CHECKPOINT_DIR> visdom_show_preds=True n_eval_cameras=40 render_size="[64,64]" video_size="[256,256]"
+projects/implicitron_trainer/visualize_reconstruction.py exp_dir=<CHECKPOINT_DIR> \
+visdom_show_preds=True n_eval_cameras=40 render_size="[64,64]" video_size="[256,256]"
 ```
 The argument `n_eval_cameras` sets the number of rendering viewpoints sampled on a trajectory, which defaults to a circular fly-around;
@@ -124,18 +129,21 @@ In the config, inner parameters can be propagated using `_args` postfix, e.g. to
 The root of the hierarchy is defined by `ExperimentConfig` dataclass.
 It has top-level fields like `eval_only` which was used above for running evaluation by adding a CLI override.
-Additionally, it has non-leaf nodes like `generic_model_args`, which dispatches the config parameters to `GenericModel`. Thus, changing the model parameters may be achieved in two ways: either by editing the config file, e.g.
+Additionally, it has non-leaf nodes like `model_factory_ImplicitronModelFactory_args.model_GenericModel_args`, which dispatches the config parameters to `GenericModel`.
+Thus, changing the model parameters may be achieved in two ways: either by editing the config file, e.g.
 ```yaml
-generic_model_args:
-  render_image_width: 800
-  raysampler_args:
-    n_pts_per_ray_training: 128
+model_factory_ImplicitronModelFactory_args:
+  model_GenericModel_args:
+    render_image_width: 800
+    raysampler_args:
+      n_pts_per_ray_training: 128
 ```
 or, equivalently, by adding the following to `pytorch3d_implicitron_runner` arguments:
 ```shell
-generic_model_args.render_image_width=800 generic_model_args.raysampler_args.n_pts_per_ray_training=128
+model_args=model_factory_ImplicitronModelFactory_args.model_GenericModel_args
+$model_args.render_image_width=800 $model_args.raysampler_args.n_pts_per_ray_training=128
 ```
 See the documentation in `pytorch3d/implicitron/tools/config.py` for more details.
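The dotted override syntax is plain OmegaConf path addressing; below is a minimal sketch of the equivalent programmatic update (the config subset and values are illustrative, not the project defaults):
```python
from omegaconf import OmegaConf

# illustrative subset of the experiment config tree
cfg = OmegaConf.create(
    {
        "model_factory_ImplicitronModelFactory_args": {
            "model_GenericModel_args": {
                "render_image_width": 400,
                "raysampler_args": {"n_pts_per_ray_training": 64},
            }
        }
    }
)

# each dotted CLI override resolves to a nested assignment on this tree
prefix = "model_factory_ImplicitronModelFactory_args.model_GenericModel_args"
OmegaConf.update(cfg, prefix + ".render_image_width", 800)
OmegaConf.update(cfg, prefix + ".raysampler_args.n_pts_per_ray_training", 128)

model_args = cfg.model_factory_ImplicitronModelFactory_args.model_GenericModel_args
assert model_args.render_image_width == 800
assert model_args.raysampler_args.n_pts_per_ray_training == 128
```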
@@ -149,11 +157,12 @@ This means that other Configurables can refer to them using the base type, while
 In that case, `_args` node name has to include the implementation type.
 More specifically, to change renderer settings, the config will look like this:
 ```yaml
-generic_model_args:
-  renderer_class_type: LSTMRenderer
-  renderer_LSTMRenderer_args:
-    num_raymarch_steps: 10
-    hidden_size: 16
+model_factory_ImplicitronModelFactory_args:
+  model_GenericModel_args:
+    renderer_class_type: LSTMRenderer
+    renderer_LSTMRenderer_args:
+      num_raymarch_steps: 10
+      hidden_size: 16
 ```
 See the documentation in `pytorch3d/implicitron/tools/config.py` for more details on the configuration system.
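The `_class_type`/`_args` pairing is the core of the replaceable-component mechanism. A simplified, self-contained sketch of the dispatch it performs (not the actual implementation in `pytorch3d/implicitron/tools/config.py`, which also handles registration by base type and recursive config expansion):
```python
# toy registry standing in for pytorch3d's `registry`; names are illustrative
RENDERER_REGISTRY = {}

def register(cls):
    RENDERER_REGISTRY[cls.__name__] = cls
    return cls

@register
class LSTMRenderer:
    def __init__(self, num_raymarch_steps: int = 10, hidden_size: int = 16):
        self.num_raymarch_steps = num_raymarch_steps
        self.hidden_size = hidden_size

def create_renderer(cfg: dict):
    # `renderer_class_type` picks the implementation; the matching
    # `renderer_<ClassType>_args` node supplies its constructor arguments
    class_type = cfg["renderer_class_type"]
    cls = RENDERER_REGISTRY[class_type]
    return cls(**cfg[f"renderer_{class_type}_args"])

renderer = create_renderer(
    {
        "renderer_class_type": "LSTMRenderer",
        "renderer_LSTMRenderer_args": {"num_raymarch_steps": 10, "hidden_size": 16},
    }
)
assert renderer.hidden_size == 16
```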
@@ -188,15 +197,17 @@ class XRayRenderer(BaseRenderer, torch.nn.Module):
 ```
 Please note the `@registry.register` decorator that registers the plug-in as an implementation of `Renderer`.
-IMPORTANT: In order for it to run, the class (or its enclosing module) has to be imported in your launch script. Additionally, this has to be done before parsing the root configuration class `ExperimentConfig`.
+IMPORTANT: In order for it to run, the class (or its enclosing module) has to be imported in your launch script.
+Additionally, this has to be done before parsing the root configuration class `ExperimentConfig`.
 Simply add `from . import x_ray_renderer` at the beginning of `experiment.py`.
 After that, you should be able to change the config with:
 ```yaml
-generic_model_args:
-  renderer_class_type: XRayRenderer
-  renderer_XRayRenderer_args:
-    n_pts_per_ray: 128
+model_factory_ImplicitronModelFactory_args:
+  model_GenericModel_args:
+    renderer_class_type: XRayRenderer
+    renderer_XRayRenderer_args:
+      n_pts_per_ray: 128
 ```
 to replace the implementation and potentially override the parameters.
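For context, the plug-in being configured here is the README's `XRayRenderer` example, whose body the page elides. A rough skeleton of such a plug-in follows; the import paths and the `forward` signature are assumptions based on the repository layout at the time, so treat this as a sketch rather than the exact README code:
```python
import torch

# assumed import paths; check the pytorch3d version you are using
from pytorch3d.implicitron.models.renderer.base import BaseRenderer
from pytorch3d.implicitron.tools.config import registry

@registry.register
class XRayRenderer(BaseRenderer, torch.nn.Module):
    n_pts_per_ray: int = 64

    # Configurables are initialized via __post_init__, not __init__;
    # torch.nn.Module still needs its own initialization
    def __post_init__(self):
        super().__init__()

    # signature approximated from BaseRenderer
    def forward(self, ray_bundle, implicit_functions, evaluation_mode, **kwargs):
        raise NotImplementedError  # rendering logic goes here
```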
@@ -252,7 +263,8 @@ model_GenericModel_args: GenericModel
 ╘== ReductionFeatureAggregator
 ```
-Please look at the annotations of the respective classes or functions for the lists of hyperparameters. `tests/experiment.yaml` shows every possible option if you have no user-defined classes.
+Please look at the annotations of the respective classes or functions for the lists of hyperparameters.
+`tests/experiment.yaml` shows every possible option if you have no user-defined classes.
 # Reproducing CO3D experiments
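Besides reading `tests/experiment.yaml`, the full set of model hyperparameters with their defaults can be dumped programmatically; a sketch, assuming `get_default_args` from `pytorch3d/implicitron/tools/config.py`:
```python
from omegaconf import OmegaConf

from pytorch3d.implicitron.models.generic_model import GenericModel
from pytorch3d.implicitron.tools.config import get_default_args

# prints every GenericModel option, including the expanded
# *_args nodes of its replaceable members, with default values
print(OmegaConf.to_yaml(get_default_args(GenericModel)))
```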


@@ -333,8 +333,10 @@ def export_scenes(
     )
     dataset_args.test_on_train = False
     # Set the rendering image size
-    config.generic_model_args.render_image_width = render_size[0]
-    config.generic_model_args.render_image_height = render_size[1]
+    model_factory_args = config.model_factory_ImplicitronModelFactory_args
+    model_args = model_factory_args.model_GenericModel_args
+    model_args.render_image_width = render_size[0]
+    model_args.render_image_height = render_size[1]
     if restrict_sequence_name is not None:
         dataset_args.restrict_sequence_name = restrict_sequence_name
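The two-step aliasing above is safe because OmegaConf sub-configs are live views into the parent tree, so assignments through `model_args` are visible from `config`; a minimal sketch of that behavior:
```python
from omegaconf import OmegaConf

config = OmegaConf.create(
    {
        "model_factory_ImplicitronModelFactory_args": {
            "model_GenericModel_args": {"render_image_width": 400}
        }
    }
)
factory_args = config.model_factory_ImplicitronModelFactory_args
model_args = factory_args.model_GenericModel_args
model_args.render_image_width = 800  # writes through to the root config
assert (
    config.model_factory_ImplicitronModelFactory_args
    .model_GenericModel_args.render_image_width
    == 800
)
```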


@@ -131,8 +131,9 @@ class GenericModel(ImplicitronModelBase):  # pyre-ignore: 13
     for more details on how to create and register a custom component.

     In the config .yaml files for experiments, the parameters below are
-    contained in the `generic_model_args` node. As GenericModel
-    derives from Configurable, the input arguments are
+    contained in the
+    `model_factory_ImplicitronModelFactory_args.model_GenericModel_args`
+    node. As GenericModel derives from ReplaceableBase, the input arguments are
     parsed by the run_auto_creation function to initialize the
     necessary member modules. Please see implicitron_trainer/README.md
     for more details on this process.
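A minimal sketch of the `run_auto_creation` flow the docstring refers to, based on the documented API in `pytorch3d/implicitron/tools/config.py` (the member names here are illustrative):
```python
from pytorch3d.implicitron.tools.config import (
    Configurable,
    expand_args_fields,
    run_auto_creation,
)

class Inner(Configurable):
    width: int = 400

class Outer(Configurable):
    inner: Inner

    def __post_init__(self):
        run_auto_creation(self)  # instantiates self.inner from inner_args

expand_args_fields(Outer)  # materializes the inner_args config node on Outer
outer = Outer()
assert outer.inner.width == 400
```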


@@ -207,10 +207,11 @@ def _load_model_config_from_yaml(config_path, strict=True) -> DictConfig:
 def _load_model_config_from_yaml_rec(cfg: DictConfig, config_path: str) -> DictConfig:
     cfg_loaded = OmegaConf.load(config_path)
-    if "generic_model_args" in cfg_loaded:
-        cfg_model_loaded = cfg_loaded.generic_model_args
-    else:
-        cfg_model_loaded = None
+    cfg_model_loaded = None
+    if "model_factory_ImplicitronModelFactory_args" in cfg_loaded:
+        factory_args = cfg_loaded.model_factory_ImplicitronModelFactory_args
+        if "model_GenericModel_args" in factory_args:
+            cfg_model_loaded = factory_args.model_GenericModel_args
     defaults = cfg_loaded.pop("defaults", None)
     if defaults is not None:
         for default_name in defaults:
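The rewritten lookup leaves `cfg_model_loaded` as `None` for configs that use neither layout; a small self-contained check of the new code path, mirroring the logic above with an in-memory config:
```python
from omegaconf import OmegaConf

cfg_loaded = OmegaConf.create(
    {
        "model_factory_ImplicitronModelFactory_args": {
            "model_GenericModel_args": {"render_image_width": 800}
        }
    }
)

# same lookup as _load_model_config_from_yaml_rec
cfg_model_loaded = None
if "model_factory_ImplicitronModelFactory_args" in cfg_loaded:
    factory_args = cfg_loaded.model_factory_ImplicitronModelFactory_args
    if "model_GenericModel_args" in factory_args:
        cfg_model_loaded = factory_args.model_GenericModel_args

assert cfg_model_loaded is not None
assert cfg_model_loaded.render_image_width == 800
```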