Summary: We don't want to call print directly in the stats.print() method. Instead, the method will return the output string to the caller.
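For illustration, a minimal sketch of the intended contract using a simplified stand-in for the Stats class (names here are illustrative, not the exact Implicitron API):
```
# Simplified stand-in: the method formats the report and returns it;
# the caller decides whether to print, log, or store the string.
class Stats:
    def __init__(self) -> None:
        self.history = {"loss": [0.42, 0.31], "psnr": [24.1, 25.3]}

    def get_status_string(self) -> str:
        # Format the latest value of every tracked metric into one line.
        parts = [f"{name}: {values[-1]:.3f}" for name, values in self.history.items()]
        return " | ".join(parts)


stats = Stats()
print(stats.get_status_string())  # the caller chooses what to do with the string
```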
Reviewed By: shapovalov
Differential Revision: D45356240
fbshipit-source-id: 2cabe3cdfb9206bf09aa7b3cdd2263148a5ba145
Summary: We don't see much value in reporting metrics by camera difficulty, while supporting it in new datasets is quite painful; hence we deprecate training cameras in the data API and ignore them in evaluation.
Reviewed By: bottler
Differential Revision: D42678879
fbshipit-source-id: aad511f6cb2ca82745f31c19594e1d80594b61d7
Summary: Loads the whole dataset, moves it to the device, and sends it for sampling, enabling heterogeneous ray sampling over the full dataset.
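For illustration, a hedged sketch of the idea with plain torch tensors (not the actual Implicitron sampler API): once the whole dataset lives on the device, each ray in a batch can come from a different frame.
```
from typing import Optional

import torch


def sample_rays_across_dataset(
    images: torch.Tensor,  # (n_frames, H, W, 3), already moved to the device
    n_rays: int,
    generator: Optional[torch.Generator] = None,
) -> torch.Tensor:
    # Heterogeneous sampling: frame, row, and column indices are drawn
    # independently per ray over the entire dataset.
    n_frames, height, width, _ = images.shape
    frame_idx = torch.randint(n_frames, (n_rays,), device=images.device, generator=generator)
    y = torch.randint(height, (n_rays,), device=images.device, generator=generator)
    x = torch.randint(width, (n_rays,), device=images.device, generator=generator)
    # Gather one RGB sample per ray; a real sampler would also compute ray
    # origins/directions from the corresponding cameras.
    return images[frame_idx, y, x]


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
all_images = torch.rand(10, 64, 64, 3, device=device)  # stand-in for the full dataset
colors = sample_rays_across_dataset(all_images, n_rays=1024)  # (1024, 3)
```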
Reviewed By: bottler
Differential Revision: D39263009
fbshipit-source-id: c527537dfc5f50116849656c9e171e868f6845b1
Summary:
Adds yaml configs to train selected methods on CO3Dv2.
A few more updates:
1) moved some fields to base classes so that we can check is_multisequence in experiment.py
2) skip loading all train cameras for multisequence datasets; without this, co3d-fewview is untrainable (see the sketch below)
3) fixed a bug in json index dataset provider v2
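As referenced in (2), a hypothetical sketch of the camera-loading skip; the names are illustrative, not the exact provider code:
```
def maybe_get_all_train_cameras(dataset_map_provider, is_multisequence: bool):
    if is_multisequence:
        # Loading every training camera of a multi-sequence dataset such as
        # CO3D-fewview is prohibitively expensive, so skip it.
        return None
    return dataset_map_provider.get_all_train_cameras()
```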
Reviewed By: kjchalup
Differential Revision: D38952755
fbshipit-source-id: 3edac6fc8e20775aa70400bd73a0e6d52b091e0c
Summary:
LLFF (and most/all non-synth datasets) will have no background/foreground distinction. Add support for data with no fg mask.
Also, we had a bug in stats loading, like this:
* Load stats.
* One of the stats has a history of length 0.
* That's fine, e.g. maybe it's fg_error but the dataset has no notion of fg/bg, so leave it at length 0.
* Check whether all the stats have the same history length as an arbitrarily chosen "reference stat".
* Oops, the reference stat happened to be the stat with length 0.
* assert (legit_stat_len == reference_stat_len (=0)) ---> failed assert
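A minimal sketch of the corrected length check (illustrative names, not the actual Stats code): the reference length is taken from a stat that actually has history, and zero-length histories are tolerated.
```
def check_history_lengths(histories: dict) -> None:
    # Take the reference length from a non-empty history, if any exists.
    nonzero_lengths = [len(h) for h in histories.values() if len(h) > 0]
    if not nonzero_lengths:
        return  # nothing loaded yet
    reference_len = nonzero_lengths[0]
    for name, history in histories.items():
        # Empty histories (e.g. fg_error on a dataset without fg/bg) are fine.
        assert len(history) in (0, reference_len), (
            f"stat {name!r} has inconsistent history length {len(history)}"
        )
```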
Also some minor fixes (from Jeremy's other diff) to support LLFF
Reviewed By: davnov134
Differential Revision: D38475272
fbshipit-source-id: 5b35ac86d1d5239759f537621f41a3aa4eb3bd68
Summary:
Stats are logically connected to the training loop, not to the model; hence we move them to the training loop.
We also remove resume_epoch from OptimizerFactory in favor of a single place: ModelFactory. This removes the need for config consistency checks, etc.
Reviewed By: kjchalup
Differential Revision: D38313475
fbshipit-source-id: a1d188a63e28459df381ff98ad8acdcdb14887b7
Summary: Remove the dataset's need to provide the task type.
Reviewed By: davnov134, kjchalup
Differential Revision: D38314000
fbshipit-source-id: 3805d885b5d4528abdc78c0da03247edb9abf3f7
Summary: Currently, seeds are set only inside the train loop. This does not ensure that the model weights are initialized the same way everywhere, which makes experiments irreproducible. This diff fixes that.
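A minimal sketch of the idea (the actual Implicitron seeding helper may differ): seed all RNGs before the model is constructed, not only inside the train loop, so that weight initialization is identical across runs.
```
import random

import numpy as np
import torch


def seed_all(seed: int) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # on recent PyTorch this also seeds the CUDA RNGs


seed_all(42)                    # seed first ...
model = torch.nn.Linear(8, 8)   # ... then build the model with reproducible init weights
```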
Reviewed By: bottler
Differential Revision: D38315840
fbshipit-source-id: 3d2ecebbc36072c2b68dd3cd8c5e30708e7dd808
Summary:
This large diff rewrites a significant portion of Implicitron's config hierarchy. The new hierarchy, and some of the default implementation classes, are as follows:
```
Experiment
data_source: ImplicitronDataSource
dataset_map_provider
data_loader_map_provider
model_factory: ImplicitronModelFactory
model: GenericModel
optimizer_factory: ImplicitronOptimizerFactory
training_loop: ImplicitronTrainingLoop
evaluator: ImplicitronEvaluator
```
1) Experiment (used to be ExperimentConfig) is now a top-level Configurable whose members are mainly (mostly new) high-level factory Configurables.
2) Experiment's job is to run the factories, do some accelerate setup, and then pass the results to the main training loop.
3) ImplicitronOptimizerFactory and ImplicitronModelFactory are new high-level factories that create the optimizer, scheduler, model, and stats objects.
4) TrainingLoop is a new configurable that runs the main training loop and the inner train-validate step.
5) Evaluator is a new configurable that TrainingLoop uses to run validation/test steps.
6) GenericModel is no longer the only model choice. Instead, ImplicitronModelBase (by default instantiated with GenericModel) is a member of Experiment and can easily be replaced with a custom user implementation.
All the new Configurables are children of ReplaceableBase, and can be easily replaced with custom implementations.
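An indicative sketch of plugging in a custom model (module paths and config keys may differ slightly from the released code): a registered subclass of the replaceable base is selected from the config by class name.
```
from pytorch3d.implicitron.models.base_model import ImplicitronModelBase
from pytorch3d.implicitron.tools.config import registry


@registry.register
class MyCustomModel(ImplicitronModelBase):
    # Dataclass-style fields become config entries automatically.
    my_custom_field: int = 3

    def forward(self, **kwargs):
        ...  # custom forward pass goes here


# The replacement is then picked in the experiment config, e.g.:
#   model_factory_ImplicitronModelFactory_args:
#     model_class_type: MyCustomModel
```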
In addition, I added support for the exponential LR schedule, updated the config files and the test, and added a config file that reproduces NeRF results together with a test that runs the repro experiment.
Reviewed By: bottler
Differential Revision: D37723227
fbshipit-source-id: b36bee880d6aa53efdd2abfaae4489d8ab1e8a27