Summary: Add a linear warmup followed by exponential LR decay. Needed to make Blender scenes converge. A sketch of the schedule is below.
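A minimal sketch of such a schedule using a standard LambdaLR (illustrative values, not the exact Implicitron implementation):
```python
import torch

model = torch.nn.Linear(3, 3)  # stand-in model
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

warmup_steps = 1000    # assumed warmup length
decay_steps = 100_000  # assumed decay horizon
gamma = 0.1            # assumed total decay factor over decay_steps

def lr_lambda(step: int) -> float:
    if step < warmup_steps:
        return step / max(1, warmup_steps)  # linear ramp 0 -> 1
    return gamma ** ((step - warmup_steps) / decay_steps)  # exponential decay

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```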
Reviewed By: kjchalup
Differential Revision: D38557007
fbshipit-source-id: ad630dbc5b8fabcb33eeb5bdeed5e4f31360bac2
Summary:
LLFF (and most/all non-synth datasets) will have no background/foreground distinction. Add support for data with no fg mask.
Also, we had a bug in stats loading, like this (a sketch of the fix follows the list):
* Load stats
* One of the stats has a history of length 0
* That's fine, e.g. maybe it's fg_error but the dataset has no notion of fg/bg. So leave it as len 0
* Check whether all the stats have the same history length as an arbitrarily chosen "reference-stat"
* Oops, the reference-stat happened to be the stat with length 0
* assert legit_stat_len == reference_stat_len (= 0) ---> failed assert
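A minimal sketch of the fix, assuming stats objects that expose a `history` list (names are illustrative, not the exact Implicitron code):
```python
def check_stats_consistency(stats: dict) -> None:
    # Choose the reference only among stats that actually have history;
    # a stat with an empty history (e.g. fg_error on a dataset with no
    # notion of fg/bg) is legitimate and must not become the reference.
    nonempty = [s for s in stats.values() if len(s.history) > 0]
    if not nonempty:
        return
    reference_len = len(nonempty[0].history)
    for s in nonempty:
        assert len(s.history) == reference_len
```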
Also some minor fixes (from Jeremy's other diff) to support LLFF
Reviewed By: davnov134
Differential Revision: D38475272
fbshipit-source-id: 5b35ac86d1d5239759f537621f41a3aa4eb3bd68
Summary: In a multisequence (fewview) scenario, it does not make sense to use all cameras for evaluating difficulty, as they come from different scenes. Use only this batch’s source (known) cameras instead, as sketched below.
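Schematically (the helper name and FrameData fields below are assumptions, not the exact diff):
```python
# Sketch: restrict the camera pool for difficulty evaluation to the
# known (source) views of the current batch, rather than all cameras
# of a multi-scene dataset.
def difficulty_camera_pool(frame_data):
    known = [
        i for i, t in enumerate(frame_data.frame_type) if t.endswith("known")
    ]
    return frame_data.camera[known]  # CamerasBase supports list indexing
```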
Reviewed By: bottler
Differential Revision: D38491070
fbshipit-source-id: d6312d8fbb125b28a33db9f53d4215bcd1ca28a8
Summary:
Request in https://github.com/facebookresearch/pytorch3d/issues/1233 for an option to disable the CUDA build.
Also an option to disable the binary build completely. This could be useful e.g. in the config tutorial, where we only need a small Python-only part of pytorch3d. A sketch of the pattern is below.
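A sketch of the general env-var-gated pattern in a setup.py; the variable names here are hypothetical, not necessarily the ones this diff introduced:
```python
import os
from torch.utils.cpp_extension import CppExtension, CUDAExtension

sources = []  # placeholder for the real list of .cpp/.cu files

def get_extensions():
    if os.environ.get("PYTORCH3D_NO_BINARY", "") == "1":
        return []  # pure-Python install, e.g. for the config tutorial
    if os.environ.get("PYTORCH3D_NO_CUDA", "") == "1":
        return [CppExtension("pytorch3d._C", sources)]
    return [CUDAExtension("pytorch3d._C", sources)]
```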
Reviewed By: kjchalup
Differential Revision: D38458624
fbshipit-source-id: 421a0b1cc31306d7e322d3e743e30a7533d7f034
Summary:
Misc fixes.
- most important: the Mac image is gone, so switch to a newer one.
- torch.concat (a newly added alias of torch.cat) was used accidentally; older PyTorch versions don't have it.
- remove lpips from testing in meta.yaml as it is breaking the conda test. Better to leave the relevant tests failing in OSS.
- TypedDict usage is breaking implicitron on Python 3.7.
Reviewed By: patricklabatut
Differential Revision: D38458164
fbshipit-source-id: b16c26453a743b9a771e2a6787b9a4d2a52e41c2
Summary: This field is specific to one purpose.
Reviewed By: patricklabatut
Differential Revision: D38424891
fbshipit-source-id: e017304497012430c30e436da7052b9ad6fc7614
Summary: One way to tidy the installation so we don't install files in site-packages/projects. Fixes https://github.com/facebookresearch/pytorch3d/issues/1279
Reviewed By: shapovalov, davnov134
Differential Revision: D38426772
fbshipit-source-id: ac1a54fbf230adb53904701e1f38bf9567f647ce
Summary: Don't copy from one part of the config to another; rather, do the copy within GenericModel.
Reviewed By: davnov134
Differential Revision: D38248828
fbshipit-source-id: ff8af985c37ea1f7df9e0aa0a45a58df34c3f893
Summary: Made the config system call open_dict when it calls the tweak function.
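For context, `open_dict` is OmegaConf's context manager that temporarily lifts struct mode, so a tweak function may now add new keys:
```python
from omegaconf import OmegaConf, open_dict

cfg = OmegaConf.create({"lr": 0.1})
OmegaConf.set_struct(cfg, True)  # adding unknown keys now raises...

with open_dict(cfg):
    cfg.new_key = 42  # ...but is allowed inside open_dict
```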
Reviewed By: shapovalov
Differential Revision: D38315334
fbshipit-source-id: 5924a92d8d0bf399bbf3788247f81fc990e265e7
Summary:
Stats are logically connected to the training loop, not to the model. Hence, move them to the training loop.
Also remove resume_epoch from OptimizerFactory in favor of a single place, ModelFactory. This removes the need for config consistency checks etc.
Reviewed By: kjchalup
Differential Revision: D38313475
fbshipit-source-id: a1d188a63e28459df381ff98ad8acdcdb14887b7
Summary: Before this diff, train_stats.py would not be created by default, EXCEPT when resuming training. This change makes it appear from the start.
Reviewed By: shapovalov
Differential Revision: D38320341
fbshipit-source-id: 8ea5b99ec81c377ae129f58e78dc2eaff94821ad
Summary: Remove the dataset's need to provide the task type.
Reviewed By: davnov134, kjchalup
Differential Revision: D38314000
fbshipit-source-id: 3805d885b5d4528abdc78c0da03247edb9abf3f7
Summary: Added _NEED_CONTROL to JsonIndexDatasetMapProviderV2 and made dataset_tweak_args use it.
Reviewed By: bottler
Differential Revision: D38313914
fbshipit-source-id: 529847571065dfba995b609a66737bd91e002cfe
Summary: Only import it if you ask for it.
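In general terms, this is the lazy-import pattern (module and function names here are hypothetical):
```python
def get_optional_backend():
    # Imported only when the caller actually asks for it, so the
    # dependency is not required at package import time.
    import optional_heavy_backend  # hypothetical module
    return optional_heavy_backend
```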
Reviewed By: kjchalup
Differential Revision: D38327167
fbshipit-source-id: 3f05231f26eda582a63afc71b669996342b0c6f9
Summary: Made eval_batches be set in the call to `__init__`, not after construction as before.
Reviewed By: bottler
Differential Revision: D38275943
fbshipit-source-id: 32737401d1ddd16c284e1851b7a91f8b041c406f
Summary: Currently, seeds are set only inside the train loop. This does not ensure that model weights are initialized the same way everywhere, which makes experiments irreproducible. This diff fixes it; a sketch is below.
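A minimal sketch of seeding everything before model construction (the exact Implicitron call site may differ):
```python
import random
import numpy as np
import torch

def seed_all(seed: int) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # also seeds all CUDA devices

seed_all(42)  # call before building the model, not only in the train loop
```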
Reviewed By: bottler
Differential Revision: D38315840
fbshipit-source-id: 3d2ecebbc36072c2b68dd3cd8c5e30708e7dd808
Summary: Make a dummy single-scene dataset using the code from generate_cow_renders (used in existing NeRF tutorials)
Reviewed By: kjchalup
Differential Revision: D38116910
fbshipit-source-id: 8db6df7098aa221c81d392e5cd21b0e67f65bd70
Summary:
This large diff rewrites a significant portion of Implicitron's config hierarchy. The new hierarchy, and some of the default implementation classes, are as follows:
```
Experiment
    data_source: ImplicitronDataSource
        dataset_map_provider
        data_loader_map_provider
    model_factory: ImplicitronModelFactory
        model: GenericModel
    optimizer_factory: ImplicitronOptimizerFactory
    training_loop: ImplicitronTrainingLoop
        evaluator: ImplicitronEvaluator
```
1) Experiment (used to be ExperimentConfig) is now a top-level Configurable, whose members are mostly new high-level factory Configurables.
2) Experiment's job is to run factories, do some accelerate setup and then pass the results to the main training loop.
3) ImplicitronOptimizerFactory and ImplicitronModelFactory are new high-level factories that create the optimizer, scheduler, model, and stats objects.
4) TrainingLoop is a new configurable that runs the main training loop and the inner train-validate step.
5) Evaluator is a new configurable that TrainingLoop uses to run validation/test steps.
6) GenericModel is not the only model choice anymore. Instead, ImplicitronModelBase (by default instantiated with GenericModel) is a member of Experiment and can easily be replaced by a user's custom implementation.
All the new Configurables are children of ReplaceableBase, and can be easily replaced with custom implementations.
In addition, I added support for the exponential LR schedule, updated the config files and the test, and added a config file that reproduces NeRF results along with a test that runs the repro experiment. A sketch of swapping in a custom model is below.
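For instance, swapping in a user model looks roughly like this (a sketch; the base-class import path and config key are assumptions):
```python
from pytorch3d.implicitron.tools.config import registry
from pytorch3d.implicitron.models.base_model import ImplicitronModelBase

@registry.register
class MyModel(ImplicitronModelBase):
    # implement forward() etc. here
    ...

# then select it in the experiment config, e.g.:
#   model_factory_ImplicitronModelFactory_args:
#     model_class_type: MyModel
```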
Reviewed By: bottler
Differential Revision: D37723227
fbshipit-source-id: b36bee880d6aa53efdd2abfaae4489d8ab1e8a27
Summary:
This is an internal change in the config system. It allows redefining a pluggable implementation with new default values. This is useful in notebooks / interactive use. For example, this now works:
```python
from pytorch3d.implicitron.tools.config import (
    Configurable,
    ReplaceableBase,
    expand_args_fields,
    registry,
    run_auto_creation,
)

class A(ReplaceableBase):
    pass

@registry.register
class B(A):
    i: int = 4

class C(Configurable):
    a: A
    a_class_type: str = "B"

    def __post_init__(self):
        run_auto_creation(self)

expand_args_fields(C)

# Redefine the registered implementation with a new default value.
@registry.register
class B(A):
    i: int = 5

c = C()
assert c.a.i == 5
```
Reviewed By: shapovalov
Differential Revision: D38219371
fbshipit-source-id: 72911a9bd3426d3359cf8802cc016fc7f6d7713b
Summary:
Adding MeshRasterizerOpenGL, a faster alternative to MeshRasterizer. The new rasterizer follows the ideas from "Differentiable Surface Rendering via non-Differentiable Sampling".
The new rasterizer is 20x faster on a 2M-face mesh (try pose optimization on Nefertiti from https://www.cs.cmu.edu/~kmcrane/Projects/ModelRepository/!). The larger the mesh, the larger the speedup.
There are two main disadvantages:
* The new rasterizer works with an OpenGL backend, so it requires pycuda.gl and pyopengl to be installed (though we avoided writing any C++ code; everything is in Python!)
* The new rasterizer is non-differentiable. However, you can still differentiate the rendering function if you use it with the new SplatterPhongShader which we recently added to PyTorch3D (see the original paper cited above). Example usage is sketched below.
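Typical usage mirrors MeshRasterizer; a sketch (assumes a CUDA device and the pycuda.gl/pyopengl dependencies):
```python
import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras,
    MeshRenderer,
    PointLights,
    RasterizationSettings,
    SplatterPhongShader,
)
from pytorch3d.renderer.opengl import MeshRasterizerOpenGL

device = torch.device("cuda:0")
cameras = FoVPerspectiveCameras(device=device)
raster_settings = RasterizationSettings(image_size=512)

renderer = MeshRenderer(
    rasterizer=MeshRasterizerOpenGL(
        cameras=cameras, raster_settings=raster_settings
    ),
    # SplatterPhongShader keeps the rendering function differentiable
    # even though the rasterizer itself is not.
    shader=SplatterPhongShader(
        device=device, cameras=cameras, lights=PointLights(device=device)
    ),
)
# images = renderer(meshes)  # meshes: a Meshes object on `device`
```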
Reviewed By: patricklabatut, jcjohnson
Differential Revision: D37698816
fbshipit-source-id: 54d120639d3cb001f096237807e54aced0acda25
Summary:
EGLContext is a utility to render with OpenGL without an attached display (that is, without a monitor).
DeviceContextManager allows us to avoid unnecessary context creations and releases. See docstrings for more info.
Reviewed By: jcjohnson
Differential Revision: D36562551
fbshipit-source-id: eb0d2a2f85555ee110e203d435a44ad243281d2c
Summary: Avoid calculating all_train_cameras before it is needed, because it is slow for some datasets; the deferred pattern is sketched below.
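Illustratively, the deferred-computation pattern (names other than all_train_cameras are hypothetical):
```python
from functools import cached_property

class DataSource:
    @cached_property
    def all_train_cameras(self):
        # Computed on first access only, then cached; building this
        # is slow for some datasets.
        return self._load_all_train_cameras()

    def _load_all_train_cameras(self):
        ...  # expensive dataset scan (stub)
```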
Reviewed By: shapovalov
Differential Revision: D38037157
fbshipit-source-id: 95461226655cde2626b680661951ab17ebb0ec75
Summary: We especially need omegaconf when testing implicitron.
Reviewed By: patricklabatut
Differential Revision: D37921440
fbshipit-source-id: 4e66fde35aa29f60eabd92bf459cd584cfd7e5ca
Summary:
X-link: https://github.com/fairinternal/pytorch3d/pull/39
Blender and LLFF cameras were sending screen-space focal lengths and principal points to a camera init function expecting NDC values.
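The conversion in question, schematically (a sketch following PyTorch3D's min(H, W)-based NDC convention; not the exact diff):
```python
# Convert screen-space intrinsics (in pixels) to PyTorch3D NDC.
def screen_to_ndc(fx, fy, px, py, image_width, image_height):
    s = min(image_width, image_height) / 2.0
    fx_ndc = fx / s
    fy_ndc = fy / s
    # PyTorch3D NDC has +X pointing left and +Y up, hence the sign flips.
    px_ndc = -(px - image_width / 2.0) / s
    py_ndc = -(py - image_height / 2.0) / s
    return fx_ndc, fy_ndc, px_ndc, py_ndc
```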
Reviewed By: shapovalov
Differential Revision: D37788686
fbshipit-source-id: 2ddf7436248bc0d174eceb04c288b93858138582
Summary: Add the conditioning types to the repro yaml files. In particular, this fixes test_conditioning_type.
Reviewed By: shapovalov
Differential Revision: D37914537
fbshipit-source-id: 621390f329d9da662d915eb3b7bc709206a20552
Summary:
I tried to run `experiment.py` and `pytorch3d_implicitron_runner` and hit a failure with this traceback: https://www.internalfb.com/phabricator/paste/view/P515734086
It seems to be due to the new OmegaConf release (2.2.2), which requires different typing. This fix overcomes it.
Reviewed By: bottler
Differential Revision: D37881644
fbshipit-source-id: be0cd4ced0526f8382cea5bdca9b340e93a2fba2
Summary:
EPnP fails the test when the number of points is below 6. As suggested, the quadratic option should in theory handle as few as 4 points, so num_pts_thresh=3 is set; skip_q is then False whenever num_pts (e.g. 4) exceeds num_pts_thresh.
To avoid bumping num_pts_thresh while still passing all the original tests, check_output is set to False when num_pts < 6, similar to the logic in lines 123-127. This makes sure the algorithm doesn't crash.
Reviewed By: shapovalov
Differential Revision: D37804438
fbshipit-source-id: 74576d63a9553e25e3ec344677edb6912b5f9354