Summary:
Introduces OverfitModel for NeRF-style training that overfits to a single scene.
It is a special case of GenericModel, disentangled from it to ease usage.
## General modifications
1. Modularize the minimum of GenericModel needed to introduce OverfitModel.
2. Introduce OverfitModel and ensure through unit testing that it behaves like GenericModel.
## Modularization
The following methods have been extracted from GenericModel to allow modularity with ManyViewModel:
- get_objective is now a call to weighted_sum_losses
- log_loss_weights
- prepare_inputs
The generic methods have been moved to a utils.py file.
The code has been simplified to introduce OverfitModel.
Private methods like chunk_generator are now public and can be used by ManyViewModel.
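As a rough sketch, the extracted loss-combination helper might look like this (the exact signature in utils.py is an assumption based on this description):
```
# Hypothetical sketch of weighted_sum_losses; the real helper in utils.py
# may differ in signature and edge-case handling.
from typing import Dict, Optional

import torch

def weighted_sum_losses(
    preds: Dict[str, torch.Tensor],
    loss_weights: Dict[str, float],
) -> Optional[torch.Tensor]:
    # Combine the individual loss terms into one training objective.
    losses = [
        preds[name] * weight
        for name, weight in loss_weights.items()
        if name in preds and weight != 0.0
    ]
    return sum(losses) if losses else None
```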
Reviewed By: shapovalov
Differential Revision: D43771992
fbshipit-source-id: 6102aeb21c7fdd56aa2ff9cd1dd23fd9fbf26315
Summary: If a configurable class inherits torch.nn.Module and is instantiated, automatically call `torch.nn.Module.__init__` on it before doing anything else.
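A minimal sketch of what this enables (class and field names are illustrative; Configurable and expand_args_fields are implicitron's config machinery):
```
# Illustrative only. Before this change, assigning a submodule in
# __post_init__ raised "cannot assign module before Module.__init__() call"
# unless torch.nn.Module.__init__ was invoked by hand.
import torch

from pytorch3d.implicitron.tools.config import Configurable, expand_args_fields

class MyHead(Configurable, torch.nn.Module):
    hidden_dim: int = 64

    def __post_init__(self):
        # torch.nn.Module.__init__ has already been called automatically.
        self.linear = torch.nn.Linear(self.hidden_dim, 1)

expand_args_fields(MyHead)
head = MyHead()
```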
Reviewed By: shapovalov
Differential Revision: D42760349
fbshipit-source-id: 409894911a4252b7987e1fd218ee9ecefbec8e62
Summary: We don't see much value in reporting metrics by camera difficulty, while supporting that in new datasets is quite painful; hence we deprecate training cameras in the data API and ignore them in evaluation.
Reviewed By: bottler
Differential Revision: D42678879
fbshipit-source-id: aad511f6cb2ca82745f31c19594e1d80594b61d7
Summary:
Addresses the following issue:
https://github.com/facebookresearch/pytorch3d/issues/1345#issuecomment-1272881244
I.e., when installed from conda, `pytorch3d_implicitron_visualizer` crashes since it invokes `main()` while `main` requires a single positional arg `argv`.
Reviewed By: shapovalov
Differential Revision: D41533497
fbshipit-source-id: e53a923eb8b2f0f9c0e92e9c0866d9cb310c4799
Summary:
Enum fields cause the following to crash since they are loaded as strings:
```
from omegaconf import OmegaConf

config = OmegaConf.load(autodumped_cfg_file)  # enum fields come back as plain strings
Experiment(**config)  # crashes when a string value hits an enum-typed field
```
It would be good to come up with a general solution, but for now we just fix the visualisation script.
Reviewed By: bottler
Differential Revision: D41140426
fbshipit-source-id: 71c1c6b1fffe3b5ab1ca0114cfa3f0d81160278f
Summary:
Allow a module's `param_groups` member to specify overrides for the param groups of its members or their members.
Also adds logging for param group assignments.
This allows defining `params.basis_matrix` in the `param_groups` of a voxel_grid.
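A hypothetical illustration of such an override (the group name is made up; only the `params.basis_matrix` key comes from this diff):
```
# Inside a voxel grid's param_groups: route one nested parameter of its
# `params` member into its own parameter group.
param_groups = {
    "params.basis_matrix": "basis_matrix_group",  # hypothetical group name
}
```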
Reviewed By: shapovalov
Differential Revision: D41080667
fbshipit-source-id: 49f3b0e5b36e496f78701db0699cbb8a7e20c51e
Summary:
Allows loading of multiple categories, provided as a comma-separated list of category names.
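For example (field name assumed from the existing single-category option):
```
# Hypothetical config value requesting three CO3D categories at once.
category = "teddybear,hydrant,apple"
```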
Reviewed By: bottler, shapovalov
Differential Revision: D40803297
fbshipit-source-id: 863938be3aa6ffefe9e563aede4a2e9e66aeeaa8
Summary: Add an option to flat-pad the last delta. Might help when training on RGB only.
Reviewed By: shapovalov
Differential Revision: D40587475
fbshipit-source-id: c763fa38948600ea532c730538dc4ff29d2c3e0a
Summary: Make Implicitron run without visdom installed.
Reviewed By: shapovalov
Differential Revision: D40587974
fbshipit-source-id: dc319596c7a4d10a4c54c556dabc89ad9d25c2fb
Summary:
Adds the ability to have different learning rates for different parts of the model. The trainable parts of Implicitron have a new member
`param_groups`: a dictionary whose keys are names of individual parameters
or of a module's members, and whose values are the parameter group that the
parameter/member will be sorted into. The "self" key denotes the parameter
group at the module level. Possible keys, including "self", do not have to
be defined. By default all parameters are put into the "default" parameter
group and use the learning rate defined in the optimizer. This can be
overridden at the:
- module level, with the "self" key: all of the module's parameters and its
child modules' parameters are put into that parameter group
- member level, which is the same as if `param_groups` in that member had
key="self" with a value equal to that parameter group. This is useful if
the member does not have `param_groups` itself, for example torch.nn.Linear.
- parameter level: the parameter with the same name as the key is put into
that parameter group.
In the optimizer factory, parameters and their learning rates are then gathered recursively.
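A hypothetical illustration of the three override levels (group and member names are made up):
```
# Hypothetical param_groups on some trainable Implicitron part.
param_groups = {
    "self": "decoder",       # module level: this module and all children
    "mlp": "decoder_slow",   # member level: the whole `mlp` submodule
    "scale_param": "fast",   # parameter level: one named parameter
}
```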
Reviewed By: shapovalov
Differential Revision: D40145802
fbshipit-source-id: 631c02b8d79ee1c0eb4c31e6e42dbd3d2882078a
Summary: Loads the whole dataset, moves it to the device, and sends it for sampling, to enable full-dataset heterogeneous raysampling.
Reviewed By: bottler
Differential Revision: D39263009
fbshipit-source-id: c527537dfc5f50116849656c9e171e868f6845b1
Summary:
Changed ray_sampler and metrics to be able to use mixed-frame raysampling.
The ray_sampler now has a new member which it passes to the pytorch3d raysampler.
If the ray bundle is heterogeneous, metrics now samples images by padding the xys first. This reduces memory consumption.
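A rough sketch of the padding idea (shapes and names are assumptions, not the actual metrics code):
```
# Assumed setup: images is (B, C, H, W); xys_list holds per-frame (N_i, 2)
# sampling locations in [-1, 1] NDC. Padding to a common N_max allows a
# single batched grid_sample call instead of a per-frame loop.
import torch
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_sequence

def sample_images_at_padded_xys(images, xys_list):
    xys = pad_sequence(xys_list, batch_first=True)           # (B, N_max, 2)
    grid = xys[:, None]                                      # (B, 1, N_max, 2)
    out = F.grid_sample(images, grid, align_corners=False)   # (B, C, 1, N_max)
    return out[:, :, 0].permute(0, 2, 1)                     # (B, N_max, C)
```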
Reviewed By: bottler, kjchalup
Differential Revision: D39542221
fbshipit-source-id: a6fec23838d3049ae5c2fd2e1f641c46c7c927e3
Summary: Allow using the new `foreach` option on optimizers.
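For reference, this is the flag exposed by recent torch.optim optimizers:
```
import torch

params = [torch.nn.Parameter(torch.zeros(3))]
# foreach=True selects the multi-tensor ("batched") implementation of the
# parameter update, which is usually faster than the per-tensor loop.
optimizer = torch.optim.Adam(params, lr=1e-3, foreach=True)
```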
Reviewed By: shapovalov
Differential Revision: D39694843
fbshipit-source-id: 97109c245b669bc6edff0f246893f95b7ae71f90
Summary: Various fixes to get visualize_reconstruction running, and an interactive test for it.
Reviewed By: kjchalup
Differential Revision: D39286691
fbshipit-source-id: 88735034cc01736b24735bcb024577e6ab7ed336
Summary: Workaround for oddity with new hydra.
Reviewed By: davnov134
Differential Revision: D39280639
fbshipit-source-id: 76e91947f633589945446db93cf2dbc259642f8a
Summary:
Move the flyaround rendering function into core implicitron.
This unblocks an example in the facebookresearch/co3d repo.
Reviewed By: bottler
Differential Revision: D39257801
fbshipit-source-id: 6841a88a43d4aa364dd86ba83ca2d4c3cf0435a4
Summary:
Adds yaml configs to train selected methods on CO3Dv2.
A few more updates:
1) moved some fields to base classes so that we can check is_multisequence in experiment.py
2) skip loading all train cameras for multisequence datasets; without this, co3d-fewview is untrainable
3) fixed a bug in the json index dataset provider v2
Reviewed By: kjchalup
Differential Revision: D38952755
fbshipit-source-id: 3edac6fc8e20775aa70400bd73a0e6d52b091e0c
Summary:
generic_model_args no longer exists. Update some references to it, mostly in the docs.
This fixes the testing of all the yaml files in test_forward pass.
Reviewed By: shapovalov
Differential Revision: D38789202
fbshipit-source-id: f11417efe772d7f86368b3598aa66c52b1309dbf
Summary: Linear followed by exponential LR progression. Needed to make Blender scenes converge.
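One way to express such a progression (the warmup length and decay rate here are illustrative, not implicitron's defaults):
```
import torch

optimizer = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=1e-3)

def lr_lambda(epoch, warmup=10, gamma=0.97):
    if epoch < warmup:
        return (epoch + 1) / warmup   # linear ramp-up
    return gamma ** (epoch - warmup)  # exponential decay afterwards

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```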
Reviewed By: kjchalup
Differential Revision: D38557007
fbshipit-source-id: ad630dbc5b8fabcb33eeb5bdeed5e4f31360bac2
Summary:
LLFF (and most, if not all, non-synthetic datasets) has no background/foreground distinction. Add support for data with no fg mask.
Also, we had a bug in stats loading, like this:
* Load stats
* One of the stats has a history of length 0
* That's fine; e.g. maybe it's fg_error but the dataset has no notion of fg/bg, so leave it at length 0
* Check whether all the stats have the same history length as an arbitrarily chosen "reference stat"
* Oops, the reference stat happened to be the stat with length 0
* assert legit_stat_len == reference_stat_len (= 0) ---> failed assert
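A minimal sketch of the fix (function and variable names are assumptions): choose the reference among stats with non-empty histories.
```
# Hypothetical check; the real stats code differs. Stats with an empty
# history (e.g. fg_error on mask-free data) are allowed and skipped when
# picking the reference length.
def check_history_lengths(histories: dict) -> None:
    lengths = {name: len(h) for name, h in histories.items()}
    nonzero = [n for n in lengths.values() if n > 0]
    reference = nonzero[0] if nonzero else 0
    for name, n in lengths.items():
        assert n in (0, reference), f"{name}: inconsistent history length {n}"
```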
Also some minor fixes (from Jeremy's other diff) to support LLFF.
Reviewed By: davnov134
Differential Revision: D38475272
fbshipit-source-id: 5b35ac86d1d5239759f537621f41a3aa4eb3bd68
Summary: Don't copy from one part of the config to another; instead, do the copy within GenericModel.
Reviewed By: davnov134
Differential Revision: D38248828
fbshipit-source-id: ff8af985c37ea1f7df9e0aa0a45a58df34c3f893
Summary:
Stats are logically connected to the training loop, not to the model. Hence, we move them to the training loop.
Also removes resume_epoch from OptimizerFactory in favor of a single place, ModelFactory. This removes the need for config consistency checks etc.
Reviewed By: kjchalup
Differential Revision: D38313475
fbshipit-source-id: a1d188a63e28459df381ff98ad8acdcdb14887b7
Summary: Before this diff, train_stats.py would not be created by default, except when resuming training. This makes the stats appear from the start.
Reviewed By: shapovalov
Differential Revision: D38320341
fbshipit-source-id: 8ea5b99ec81c377ae129f58e78dc2eaff94821ad
Summary: Remove the dataset's need to provide the task type.
Reviewed By: davnov134, kjchalup
Differential Revision: D38314000
fbshipit-source-id: 3805d885b5d4528abdc78c0da03247edb9abf3f7
Summary:
Added _NEED_CONTROL to JsonIndexDatasetMapProviderV2 and made dataset_tweak_args use it.
Reviewed By: bottler
Differential Revision: D38313914
fbshipit-source-id: 529847571065dfba995b609a66737bd91e002cfe
Summary: Only import it if you ask for it.
Reviewed By: kjchalup
Differential Revision: D38327167
fbshipit-source-id: 3f05231f26eda582a63afc71b669996342b0c6f9
Summary: Currently, seeds are set only inside the train loop. But this does not ensure that the model weights are initialized the same way everywhere, which makes experiments irreproducible. This diff fixes that.
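The idea, as a minimal sketch (the helper name is assumed; the actual diff wires this into the experiment setup):
```
import random

import numpy as np
import torch

def seed_all(seed: int) -> None:
    # Seed every RNG that can influence weight initialization.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

seed_all(42)  # call before the model is built, not only in the train loop
```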
Reviewed By: bottler
Differential Revision: D38315840
fbshipit-source-id: 3d2ecebbc36072c2b68dd3cd8c5e30708e7dd808
Summary: Make a dummy single-scene dataset using the code from generate_cow_renders (used in existing NeRF tutorials)
Reviewed By: kjchalup
Differential Revision: D38116910
fbshipit-source-id: 8db6df7098aa221c81d392e5cd21b0e67f65bd70
Summary:
This large diff rewrites a significant portion of Implicitron's config hierarchy. The new hierarchy, and some of the default implementation classes, are as follows:
```
Experiment
data_source: ImplicitronDataSource
dataset_map_provider
data_loader_map_provider
model_factory: ImplicitronModelFactory
model: GenericModel
optimizer_factory: ImplicitronOptimizerFactory
training_loop: ImplicitronTrainingLoop
evaluator: ImplicitronEvaluator
```
1) Experiment (previously ExperimentConfig) is now a top-level Configurable; its members are mainly (mostly new) high-level factory Configurables.
2) Experiment's job is to run factories, do some accelerate setup and then pass the results to the main training loop.
3) ImplicitronOptimizerFactory and ImplicitronModelFactory are new high-level factories that create the optimizer, scheduler, model, and stats objects.
4) TrainingLoop is a new configurable that runs the main training loop and the inner train-validate step.
5) Evaluator is a new configurable that TrainingLoop uses to run validation/test steps.
6) GenericModel is not the only model choice anymore. Instead, ImplicitronModelBase (by default instantiated with GenericModel) is a member of Experiment and can be easily replaced by a custom implementation by the user.
All the new Configurables are children of ReplaceableBase, and can be easily replaced with custom implementations.
In addition, I added support for the exponential LR schedule, updated the config files and the test, and added a config file that reproduces NeRF results along with a test that runs the repro experiment.
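For example, swapping in a custom model might look roughly like this (a sketch assuming the registry mechanism described above; the class body is elided):
```
# Hypothetical custom model registered as a replacement for GenericModel.
from pytorch3d.implicitron.models.base_model import ImplicitronModelBase
from pytorch3d.implicitron.tools.config import registry

@registry.register
class MyModel(ImplicitronModelBase):
    def forward(self, **kwargs):
        ...  # custom implementation goes here
```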
Reviewed By: bottler
Differential Revision: D37723227
fbshipit-source-id: b36bee880d6aa53efdd2abfaae4489d8ab1e8a27
Summary: Avoid calculating all_train_cameras before it is needed, because it is slow in some datasets.
Reviewed By: shapovalov
Differential Revision: D38037157
fbshipit-source-id: 95461226655cde2626b680661951ab17ebb0ec75
Summary: Add the conditioning types to the repro yaml files. In particular, this fixes test_conditioning_type.
Reviewed By: shapovalov
Differential Revision: D37914537
fbshipit-source-id: 621390f329d9da662d915eb3b7bc709206a20552