Summary:
generic_model_args no longer exists. Update the remaining references to it, mostly in documentation.
This makes the testing of all the YAML files in test_forward pass.
Reviewed By: shapovalov
Differential Revision: D38789202
fbshipit-source-id: f11417efe772d7f86368b3598aa66c52b1309dbf
Summary:
We identified that these logging statements can degrade performance in certain cases. I propose removing them from the regular renderer implementation and letting individuals re-insert debug logging wherever needed, on a case-by-case basis.
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1260
Reviewed By: kjchalup
Differential Revision: D38737439
Pulled By: bottler
fbshipit-source-id: cf9dcbbeae4dbf214c2e17d5bafa00b2ff796393
Summary: Useful for visualising COLMAP output where some frames are not correctly registered.
Reviewed By: bottler
Differential Revision: D38743191
fbshipit-source-id: e823df2997870dc41d76784e112d4349f904d311
Summary: Previously, "psnr" was evaluated between the masked g.t. image and the render. To avoid confusion, "psnr" is now renamed to "psnr_masked".
Reviewed By: bottler
Differential Revision: D38707511
fbshipit-source-id: 8ee881ab1a05453d6692dde9782333a47d8c1234
Summary: Also reports the PSNR between the unmasked g.t. image and the render.
Reviewed By: bottler
Differential Revision: D38655943
fbshipit-source-id: 1603a2d02116ea1ce037e5530abe1afc65a2ba93
Summary: This makes the new volumes tutorial work on Google Colab.
Reviewed By: kjchalup
Differential Revision: D38501906
fbshipit-source-id: a606a357e929dae903dc4d9067bd1519f05b1458
Summary:
LLFF (and most/all non-synthetic datasets) will have no background/foreground distinction. Add support for data with no fg mask.
Also, we had a bug in stats loading, like this:
* Load stats.
* One of the stats has a history of length 0.
* That's fine; e.g. maybe it's fg_error but the dataset has no notion of fg/bg, so leave it at length 0.
* Check whether all the stats have the same history length as an arbitrarily chosen "reference-stat".
* Oops, the reference-stat happened to be the stat with length 0.
* assert legit_stat_len == reference_stat_len (== 0) ---> failed assert.
Also some minor fixes (from Jeremy's other diff) to support LLFF. A hypothetical sketch of the kind of guard is below.
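A minimal, hypothetical sketch of that guard (illustrative names, not the actual Implicitron code): pick the reference stat only among stats with non-empty history, so a legitimately empty stat cannot trip the length check.
```python
from typing import Dict, List

def check_stat_history_lengths(histories: Dict[str, List[float]]) -> None:
    # Choose the reference stat among stats with non-empty history, so that a
    # legitimately empty stat (e.g. fg_error on a dataset with no fg/bg masks)
    # cannot become the reference and make the assert fail.
    non_empty = {name: len(h) for name, h in histories.items() if len(h) > 0}
    if not non_empty:
        return  # nothing to compare
    reference_name, reference_len = next(iter(non_empty.items()))
    for name, length in non_empty.items():
        assert length == reference_len, (
            f"Stat {name!r} has history length {length}, "
            f"but reference stat {reference_name!r} has {reference_len}."
        )
```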
Reviewed By: davnov134
Differential Revision: D38475272
fbshipit-source-id: 5b35ac86d1d5239759f537621f41a3aa4eb3bd68
Summary: In a multi-sequence (few-view) scenario, it does not make sense to use all cameras for evaluating the difficulty, as they come from different scenes. Use only this batch's source (known) cameras instead.
Reviewed By: bottler
Differential Revision: D38491070
fbshipit-source-id: d6312d8fbb125b28a33db9f53d4215bcd1ca28a8
Summary:
Misc fixes:
- Most important: the macOS image is gone, so switch to a newer one.
- torch.concat is new (older PyTorch versions only have torch.cat) and was used accidentally.
- Remove lpips from testing in meta.yaml as it is breaking the conda test. Better to leave the relevant tests failing in OSS.
- TypedDict usage is breaking Implicitron on Python 3.7.
Reviewed By: patricklabatut
Differential Revision: D38458164
fbshipit-source-id: b16c26453a743b9a771e2a6787b9a4d2a52e41c2
Summary: This field is specific to one purpose.
Reviewed By: patricklabatut
Differential Revision: D38424891
fbshipit-source-id: e017304497012430c30e436da7052b9ad6fc7614
Summary: Don't copy from one part of the config to another; instead, do the copy within GenericModel.
Reviewed By: davnov134
Differential Revision: D38248828
fbshipit-source-id: ff8af985c37ea1f7df9e0aa0a45a58df34c3f893
Summary: Made the config system call open_dict when it calls the tweak function.
Reviewed By: shapovalov
Differential Revision: D38315334
fbshipit-source-id: 5924a92d8d0bf399bbf3788247f81fc990e265e7
Summary:
Stats are logically connected to the training loop, not to the model, so they are moved to the training loop.
Also, resume_epoch is removed from OptimizerFactory in favor of a single place, ModelFactory. This removes the need for config consistency checks etc.
Reviewed By: kjchalup
Differential Revision: D38313475
fbshipit-source-id: a1d188a63e28459df381ff98ad8acdcdb14887b7
Summary: Remove the dataset's need to provide the task type.
Reviewed By: davnov134, kjchalup
Differential Revision: D38314000
fbshipit-source-id: 3805d885b5d4528abdc78c0da03247edb9abf3f7
Summary:
Added _NEED_CONTROL to JsonIndexDatasetMapProviderV2 and made dataset_tweak_args use it.
Reviewed By: bottler
Differential Revision: D38313914
fbshipit-source-id: 529847571065dfba995b609a66737bd91e002cfe
Summary: Only import it if you ask for it.
Reviewed By: kjchalup
Differential Revision: D38327167
fbshipit-source-id: 3f05231f26eda582a63afc71b669996342b0c6f9
Summary: Made eval_batches be set in the call to `__init__`, not after construction as before.
Reviewed By: bottler
Differential Revision: D38275943
fbshipit-source-id: 32737401d1ddd16c284e1851b7a91f8b041c406f
Summary: Make a dummy single-scene dataset using the code from generate_cow_renders (used in existing NeRF tutorials)
Reviewed By: kjchalup
Differential Revision: D38116910
fbshipit-source-id: 8db6df7098aa221c81d392e5cd21b0e67f65bd70
Summary:
This large diff rewrites a significant portion of Implicitron's config hierarchy. The new hierarchy, and some of the default implementation classes, are as follows:
```
Experiment
  data_source: ImplicitronDataSource
    dataset_map_provider
    data_loader_map_provider
  model_factory: ImplicitronModelFactory
    model: GenericModel
  optimizer_factory: ImplicitronOptimizerFactory
  training_loop: ImplicitronTrainingLoop
    evaluator: ImplicitronEvaluator
```
1) Experiment (used to be ExperimentConfig) is now a top-level Configurable and contains as members mainly (mostly new) high-level factory Configurables.
2) Experiment's job is to run factories, do some accelerate setup and then pass the results to the main training loop.
3) ImplicitronOptimizerFactory and ImplicitronModelFactory are new high-level factories that create the optimizer, scheduler, model, and stats objects.
4) TrainingLoop is a new configurable that runs the main training loop and the inner train-validate step.
5) Evaluator is a new configurable that TrainingLoop uses to run validation/test steps.
6) GenericModel is not the only model choice anymore. Instead, ImplicitronModelBase (by default instantiated with GenericModel) is a member of Experiment and can be easily replaced by a custom implementation by the user.
All the new Configurables are children of ReplaceableBase, and can be easily replaced with custom implementations.
In addition, I added support for the exponential LR schedule, updated the config files and the test, and added a config file that reproduces NeRF results, along with a test to run the repro experiment.
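As a rough illustration of point 6 (the subclass and its fields here are made up, and the exact config wiring may differ from this sketch), a custom model can be registered and then selected by class type:
```python
from pytorch3d.implicitron.models.base_model import ImplicitronModelBase
from pytorch3d.implicitron.tools.config import registry

@registry.register
class MyCustomModel(ImplicitronModelBase):
    # Hypothetical replacement for GenericModel; a real implementation would
    # define forward() with the signature the training loop expects.
    hidden_dim: int = 64

    def __post_init__(self):
        super().__init__()

    def forward(self, **kwargs):
        raise NotImplementedError
```
The experiment config can then pick it up with something like `model_class_type: MyCustomModel` in the model factory's arguments.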
Reviewed By: bottler
Differential Revision: D37723227
fbshipit-source-id: b36bee880d6aa53efdd2abfaae4489d8ab1e8a27
Summary:
This is an internal change in the config system. It allows redefining a pluggable implementation with new default values. This is useful in notebooks / interactive use. For example, this now works:
from pytorch3d.implicitron.tools.config import Configurable, ReplaceableBase, expand_args_fields, registry, run_auto_creation

class A(ReplaceableBase):
    pass

@registry.register
class B(A):
    i: int = 4

class C(Configurable):
    a: A
    a_class_type: str = "B"

    def __post_init__(self):
        run_auto_creation(self)

expand_args_fields(C)

# Redefine B with a new default value; C now picks it up.
@registry.register
class B(A):
    i: int = 5

c = C()
assert c.a.i == 5
Reviewed By: shapovalov
Differential Revision: D38219371
fbshipit-source-id: 72911a9bd3426d3359cf8802cc016fc7f6d7713b
Summary:
Adding MeshRasterizerOpenGL, a faster alternative to MeshRasterizer. The new rasterizer follows the ideas from "Differentiable Surface Rendering via Non-Differentiable Sampling".
The new rasterizer is 20x faster on a 2M-face mesh (try pose optimization on Nefertiti from https://www.cs.cmu.edu/~kmcrane/Projects/ModelRepository/!). The larger the mesh, the larger the speedup.
There are two main disadvantages:
* The new rasterizer works with an OpenGL backend, so it requires pycuda.gl and pyopengl to be installed (though we avoided writing any C++ code, everything is in Python!).
* The new rasterizer is non-differentiable. However, you can still differentiate the rendering function if you use it with the new SplatterPhongShader which we recently added to PyTorch3D (see the original paper cited above).
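A minimal usage sketch (untested here; it assumes a CUDA device with working EGL plus pycuda.gl and pyopengl, and that MeshRasterizerOpenGL is importable from pytorch3d.renderer.opengl; the .obj path is a placeholder):
```python
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras,
    MeshRenderer,
    RasterizationSettings,
    SplatterPhongShader,
)
from pytorch3d.renderer.opengl import MeshRasterizerOpenGL

device = torch.device("cuda:0")
# Any large mesh; "nefertiti.obj" is a placeholder path.
mesh = load_objs_as_meshes(["nefertiti.obj"], device=device)
cameras = FoVPerspectiveCameras(device=device)

renderer = MeshRenderer(
    rasterizer=MeshRasterizerOpenGL(
        cameras=cameras,
        raster_settings=RasterizationSettings(image_size=512),
    ),
    # Rasterization itself is non-differentiable, but SplatterPhongShader
    # keeps the overall rendering function differentiable.
    shader=SplatterPhongShader(device=device, cameras=cameras),
)
images = renderer(mesh)
```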
Reviewed By: patricklabatut, jcjohnson
Differential Revision: D37698816
fbshipit-source-id: 54d120639d3cb001f096237807e54aced0acda25
Summary:
EGLContext is a utility to render with OpenGL without an attached display (that is, without a monitor).
DeviceContextManager allows us to avoid unnecessary context creations and releases. See docstrings for more info.
Reviewed By: jcjohnson
Differential Revision: D36562551
fbshipit-source-id: eb0d2a2f85555ee110e203d435a44ad243281d2c
Summary: Avoid calculating all_train_cameras before it is needed, because it is slow in some datasets.
Reviewed By: shapovalov
Differential Revision: D38037157
fbshipit-source-id: 95461226655cde2626b680661951ab17ebb0ec75
Summary:
X-link: https://github.com/fairinternal/pytorch3d/pull/39
Blender and LLFF cameras were sending screen-space focal length and principal point to a camera init function expecting NDC values.
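For reference, a sketch of the screen-to-NDC conversion such parameters need, following PyTorch3D's convention of normalizing by the shorter image side (illustrative; not the exact code in this diff):
```python
def screen_to_ndc_intrinsics(fx, fy, px, py, image_width, image_height):
    # PyTorch3D NDC: +X points left, +Y points up, and the shorter image
    # side spans [-1, 1].
    s = min(image_width, image_height)
    fx_ndc = fx * 2.0 / s
    fy_ndc = fy * 2.0 / s
    px_ndc = -(px - image_width / 2.0) * 2.0 / s
    py_ndc = -(py - image_height / 2.0) * 2.0 / s
    return fx_ndc, fy_ndc, px_ndc, py_ndc
```
Alternatively, screen-space intrinsics can be passed to PerspectiveCameras directly with in_ndc=False together with image_size.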
Reviewed By: shapovalov
Differential Revision: D37788686
fbshipit-source-id: 2ddf7436248bc0d174eceb04c288b93858138582
Summary: Removing 1 from the crop mask does not seem sensible.
Reviewed By: bottler, shapovalov
Differential Revision: D37843680
fbshipit-source-id: 70cec80f9ea26deac63312da62b9c8af27d2a010
Summary:
1. Random sampling of num batches without replacement is not supported.
2. Providers should implement the interface for the training loop to work.
Reviewed By: bottler, davnov134
Differential Revision: D37815388
fbshipit-source-id: 8a2795b524e733f07346ffdb20a9c0eb1a2b8190
Summary: One more bugfix in JsonIndexDataset.
Reviewed By: bottler
Differential Revision: D37789138
fbshipit-source-id: 2fb2bda7448674091ff6b279175f0bbd16ff7a62
Summary:
This fixes an indexing bug in HardDepthShader and adds proper unit tests for both of the depth shaders. This bug was introduced when updating the shader sizes and discovered when I switched my local model onto pytorch3d trunk instead of the patched copy.
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1252
Test Plan:
Unit test + custom model code
```
pytest tests/test_shader.py
```

Reviewed By: bottler
Differential Revision: D37775767
Pulled By: d4l3k
fbshipit-source-id: 5f001903985976d7067d1fa0a3102d602790e3e8
Summary:
For 3D segmentation problems it's really useful to be able to train models from multiple viewpoints using PyTorch3D as the renderer. Currently, due to hardcoded assumptions in a few spots, the mesh renderer only supports rendering RGB (3-channel) data. You can encode the classification information as 3-channel data, but if you have more than 3 classes you're out of luck.
This relaxes the assumptions to make rendering semantic classes work with `HardFlatShader` and `AmbientLights` with no diffuse/specular components. The other shaders/lights don't make sense for classification since they mutate the texture values in some way.
This only requires changes in `Materials` and `AmbientLights`. The bulk of the code is the unit test.
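A rough sketch of the idea (the multi-channel Materials/AmbientLights arguments below are assumptions based on this description, not verified against the final API):
```python
import torch
from pytorch3d.renderer import (
    AmbientLights,
    FoVPerspectiveCameras,
    HardFlatShader,
    Materials,
    MeshRasterizer,
    MeshRenderer,
    RasterizationSettings,
    TexturesVertex,
)
from pytorch3d.structures import Meshes
from pytorch3d.utils import ico_sphere

K = 5  # number of semantic classes, stored as K texture channels
device = torch.device("cuda:0")

sphere = ico_sphere(2, device)
verts, faces = sphere.verts_packed(), sphere.faces_packed()
# Per-vertex class scores (e.g. one-hot labels) instead of RGB values.
class_scores = torch.rand(1, verts.shape[0], K, device=device)
mesh = Meshes(verts=[verts], faces=[faces], textures=TexturesVertex(class_scores))

cameras = FoVPerspectiveCameras(device=device)
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras, raster_settings=RasterizationSettings(image_size=256)
    ),
    # Ambient-only K-channel lighting so texture values pass through
    # unchanged; diffuse/specular terms would mix the class channels.
    shader=HardFlatShader(
        device=device,
        cameras=cameras,
        lights=AmbientLights(ambient_color=((1.0,) * K,), device=device),
        materials=Materials(
            ambient_color=((1.0,) * K,),
            diffuse_color=((0.0,) * K,),
            specular_color=((0.0,) * K,),
            device=device,
        ),
    ),
)
class_images = renderer(mesh)  # per-pixel class scores, plus an alpha channel
```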
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1248
Test Plan: Added a unit test that renders a 5-dimensional texture and compares dimensions 2-5 to a stored picture.
Reviewed By: bottler
Differential Revision: D37764610
Pulled By: d4l3k
fbshipit-source-id: 031895724d9318a6f6bab5b31055bb3f438176a5