Summary: Implement `submeshes` for TexturesUV. Fix what Meshes.submeshes passes to the texture's submeshes function to make this possible.
Reviewed By: bottler
Differential Revision: D52192060
fbshipit-source-id: 526734962e3376aaf75654200164cdcebfff6997
Summary: Performance improvement: Use torch.lerp to map uv coordinates to the range needed for grid_sample (i.e. map [0, 1] to [-1, 1] and invert the y-axis)
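A minimal sketch of the mapping (not necessarily the exact TexturesUV code): a single broadcasted `torch.lerp` sends u from [0, 1] to [-1, 1] and v from [0, 1] to [1, -1], which is the convention `grid_sample` expects.
```python
import torch

# Sketch only: map uv in [0, 1] to grid_sample coordinates in one fused lerp.
uv = torch.rand(2, 100, 2)              # (N, P, 2) uv coordinates in [0, 1]
start = uv.new_tensor([-1.0, 1.0])      # u=0 -> -1, v=0 -> +1
end = uv.new_tensor([1.0, -1.0])        # u=1 -> +1, v=1 -> -1 (y-axis inverted)
grid = torch.lerp(start, end, uv)       # (N, P, 2), ready for F.grid_sample
```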
Reviewed By: bottler
Differential Revision: D51961728
fbshipit-source-id: db19a5e3f482e9af7b96b20f88a1e5d0076dac43
Summary: Fixes lint in test_render_points in the PyTorch3D library.
Differential Revision: D51289841
fbshipit-source-id: 1eae621eb8e87b0fe5979f35acd878944f574a6a
Summary:
When the ply format looks as follows:
```
comment TextureFile ***.png
element vertex 892
property double x
property double y
property double z
property double nx
property double ny
property double nz
property double texture_u
property double texture_v
```
the `MeshPlyFormat` class will read the per-vertex uv coordinates from the ply file and load the uv map from the image named in the TextureFile comment.
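A minimal usage sketch, assuming a file `mesh.ply` with a header like the one above; `IO().load_mesh` is the standard PyTorch3D entry point and is expected to pick up the uv texture via `MeshPlyFormat`.
```python
from pytorch3d.io import IO

# Sketch only: load a ply with texture_u/texture_v properties and a
# "comment TextureFile <name>.png" line; the returned Meshes should carry
# a UV texture read from that image.
mesh = IO().load_mesh("mesh.ply", device="cpu")
```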
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1100
Reviewed By: MichaelRamamonjisoa
Differential Revision: D50885176
Pulled By: bottler
fbshipit-source-id: be75b1ec9a17a1ed87dbcf846a9072ea967aec37
Summary:
The `chamfer_distance` function currently allows `"sum"` or `"mean"` reduction, but does not support returning unreduced (per-point) loss terms. Unreduced losses could be useful if the user wishes to inspect individual losses, or perform additional modifications to loss terms before reduction. One example would be implementing a robust kernel over the loss.
This PR adds a `None` option to the `point_reduction` parameter, similar to `batch_reduction`. In the case of bi-directional chamfer loss, both the forward and backward distances are returned as a tuple of Tensors of shape `[D, N]`. If normals are provided, the same logic applies to the normal losses as well.
This PR addresses issue https://github.com/facebookresearch/pytorch3d/issues/622.
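A minimal sketch of the new option (tensor shapes are placeholders, and it is assumed that `batch_reduction=None` must accompany `point_reduction=None`): the per-point terms come back unreduced, so a robust kernel can be applied before reducing manually.
```python
import torch
from pytorch3d.loss import chamfer_distance

x = torch.rand(4, 128, 3)
y = torch.rand(4, 256, 3)
# Unreduced per-point distances in both directions.
(cham_x, cham_y), _ = chamfer_distance(
    x, y, batch_reduction=None, point_reduction=None
)
# Example of a robust kernel applied before manual reduction.
robust = torch.sqrt(cham_x + 1e-8).mean() + torch.sqrt(cham_y + 1e-8).mean()
```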
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1605
Reviewed By: jcjohnson
Differential Revision: D48313857
Pulled By: bottler
fbshipit-source-id: 35c824827a143649b04166c4817449e1341b7fd9
Summary:
Convert ImplicitronRayBundle to a "classic" class instead of a dataclass. This change is introduced as a way to preserve the ImplicitronRayBundle interface while allowing two outcomes:
- the init `lengths` argument is now an `Optional[torch.Tensor]` instead of `torch.Tensor`
- `lengths` is now a property which returns a `torch.Tensor`. The property either recomputes lengths from bins or returns the stored `_lengths`. `_lengths` is None if bins is set, which saves us a bit of memory.
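A minimal sketch of the property logic described above, assuming the recomputed lengths are the bin midpoints (the class name is illustrative, not the real ImplicitronRayBundle):
```python
import torch

class _RayBundleSketch:
    def __init__(self, lengths=None, bins=None):
        if bins is not None:
            lengths = None          # save memory: lengths are derivable from bins
        self._lengths = lengths
        self.bins = bins

    @property
    def lengths(self) -> torch.Tensor:
        if self.bins is not None:
            # midpoints of consecutive bin edges
            return torch.lerp(self.bins[..., :-1], self.bins[..., 1:], 0.5)
        return self._lengths
```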
Reviewed By: shapovalov
Differential Revision: D46686094
fbshipit-source-id: 3c75c0947216476ebff542b6f552d311024a679b
Summary:
## Context
Bins are used in MipNerf to make intervals easy to manipulate. For example, `bins[..., :-1]` gives the left coordinates of the intervals, while `bins[..., 1:]` gives the right coordinates.
We introduce here support for bins as in the MipNerf implementation.
## RayPointRefiner
Small changes have been made to modify RayPointRefiner.
- If bins is None
```
mids = torch.lerp(ray_bundle.lengths[..., 1:], ray_bundle.lengths[..., :-1], 0.5)
z_samples = sample_pdf(
mids, # [..., npt]
weights[..., 1:-1], # [..., npt - 1]
...
)
```
- If bins is not None
In the MipNerf implementation the sampling is done on all the bins. This allows us to use the full weights tensor without slicing it.
```
z_samples = sample_pdf(
ray_bundle.bins, # [..., npt + 1]
weights, # [..., npt]
...
)
```
## RayMarcher
Add an optional ray_deltas argument. If it is None, keep the same deltas computation from ray_lengths.
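A minimal sketch of that fallback, with a hypothetical helper name: deltas are the differences of consecutive ray_lengths, and repeating the last delta is one common way to pad the final interval.
```python
import torch

def _deltas_from_lengths(ray_lengths: torch.Tensor, ray_deltas=None) -> torch.Tensor:
    if ray_deltas is not None:
        return ray_deltas
    # distance between consecutive samples along each ray
    deltas = ray_lengths[..., 1:] - ray_lengths[..., :-1]
    # pad the last interval by repeating the final delta (one possible choice)
    return torch.cat([deltas, deltas[..., -1:]], dim=-1)
```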
Reviewed By: shapovalov
Differential Revision: D46389092
fbshipit-source-id: d4f1963310065bd31c1c7fac1adfe11cbeaba606
Summary:
Add blurpool as defined in [MIP-NeRF](https://arxiv.org/abs/2103.13415).
It has been added as an option for RayPointRefiner.
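A minimal sketch of the MIP-NeRF blurpool on the resampling weights (the function name and `eps` padding value are illustrative): a 2-tap max filter followed by a 2-tap blur, which widens the sampling distribution slightly.
```python
import torch

def blurpool_weights(weights: torch.Tensor, eps: float = 0.01) -> torch.Tensor:
    # replicate-pad the ends so the filters keep the original length
    padded = torch.cat([weights[..., :1], weights, weights[..., -1:]], dim=-1)
    maxes = torch.maximum(padded[..., :-1], padded[..., 1:])   # 2-tap max filter
    blurred = 0.5 * (maxes[..., :-1] + maxes[..., 1:])          # 2-tap blur
    return blurred + eps                                        # small padding term
```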
Reviewed By: shapovalov
Differential Revision: D46356189
fbshipit-source-id: ad841bad86d2b591a68e1cb885d4f781cf26c111
Summary: Add a new implicit module Integral Position Encoding based on [MIP-NeRF](https://arxiv.org/abs/2103.13415).
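A minimal sketch of integrated positional encoding for diagonal Gaussian inputs (function and argument names are illustrative, not the module's API): each frequency's sin/cos of the mean is attenuated by exp(-0.5 * scale^2 * variance).
```python
import torch

def integrated_pos_enc(means: torch.Tensor, diag_cov: torch.Tensor, n_freqs: int = 4):
    # means, diag_cov: (..., D) mean and diagonal covariance of the input Gaussians
    scales = 2.0 ** torch.arange(n_freqs, device=means.device)      # (L,)
    scaled_means = means[..., None, :] * scales[:, None]            # (..., L, D)
    scaled_vars = diag_cov[..., None, :] * scales[:, None] ** 2     # (..., L, D)
    atten = torch.exp(-0.5 * scaled_vars)                           # frequency attenuation
    return torch.cat(
        [atten * torch.sin(scaled_means), atten * torch.cos(scaled_means)], dim=-1
    ).flatten(-2)                                                   # (..., 2 * L * D)
```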
Reviewed By: shapovalov
Differential Revision: D46352730
fbshipit-source-id: c6a56134c975d80052b3a11f5e92fd7d95cbff1e
Summary:
Introduce methods to approximate the radii of conical frustums along rays as described in [MipNerf](https://arxiv.org/abs/2103.13415):
- Two new attributes are added to ImplicitronRayBundle: bins and radii. Bins has size n_pts_per_ray + 1, which lets us easily manipulate the n_pts_per_ray intervals; for example, the interval coordinates are needed in the radii computation for \(t_{\mu}\) and \(t_{\delta}\). Radii are used to store the radii of the conical frustums.
- Add 3 new methods to compute the radii (a sketch of the first method's formulas follows after this list):
  - approximate_conical_frustum_as_gaussians: computes the mean along the ray direction, the variance of the conical frustum with respect to t, and the variance of the conical frustum with respect to its radius. This implementation follows the stable computation defined in the paper.
  - compute_3d_diagonal_covariance_gaussian: leverages the two previously computed variances to find the diagonal covariance of the Gaussian.
  - conical_frustum_to_gaussian: combines everything to compute the means and the diagonal covariances along the ray of the Gaussians.
- In AbstractMaskRaySampler, introduce the attribute `cast_ray_bundle_as_cone`. If False, the previous behaviour of the RaySampler is unchanged. If True, the samplers sample `n_pts_per_ray + 1` points instead of `n_pts_per_ray`; these points are then used to set the bins attribute of ImplicitronRayBundle. Support for HeterogeneousRayBundle has not been added since the current code does not allow it; a safeguard has been added to avoid a silent bug in the future.
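A hedged sketch of the stable frustum-to-Gaussian approximation from the MIP-NeRF paper that approximate_conical_frustum_as_gaussians follows (the signature below is illustrative, not the actual API): each interval [t0, t1] along a ray with pixel radius r is approximated by a Gaussian described by a mean distance and two variances.
```python
import torch

def conical_frustum_gaussian(t0: torch.Tensor, t1: torch.Tensor, radii: torch.Tensor):
    # t_mu / t_delta: midpoint and half-width of the interval
    t_mu = 0.5 * (t0 + t1)
    t_delta = 0.5 * (t1 - t0)
    denom = 3 * t_mu ** 2 + t_delta ** 2
    # stable formulas from the MIP-NeRF paper
    t_mean = t_mu + (2 * t_mu * t_delta ** 2) / denom
    t_var = t_delta ** 2 / 3 - (4 / 15) * (
        t_delta ** 4 * (12 * t_mu ** 2 - t_delta ** 2) / denom ** 2
    )
    r_var = radii ** 2 * (
        t_mu ** 2 / 4 + (5 / 12) * t_delta ** 2 - (4 / 15) * t_delta ** 4 / denom
    )
    return t_mean, t_var, r_var
```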
Reviewed By: shapovalov
Differential Revision: D45269190
fbshipit-source-id: bf22fad12d71d55392f054e3f680013aa0d59b78
Summary: Make the test work in isolation, and when run internally make it not try the sqlalchemy files.
Reviewed By: shapovalov
Differential Revision: D46352513
fbshipit-source-id: 7417a25d7a5347d937631c9f56ae4e3242dd622e
Summary: Making it easier for the clients to use these datasets.
Reviewed By: bottler
Differential Revision: D46727179
fbshipit-source-id: cf619aee4c4c0222a74b30ea590cf37f08f014cc
Summary:
Adds stratified sampling of sequences within categories applied after category / sequence filters but before the num sequence limit.
It respects the insertion order into the sequence_annots table, i.e. it takes the top N sequences within each category.
Reviewed By: bottler
Differential Revision: D46724002
fbshipit-source-id: 597cb2a795c3f3bc07f838fc51b4e95a4f981ad3
Summary: Single directional chamfer distance and option to use non-absolute cosine similarity
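A minimal usage sketch of the two new options (keyword names taken from this diff; tensor values are placeholders): only the x to y direction is evaluated, and the normals term uses the signed rather than absolute cosine similarity.
```python
import torch
from pytorch3d.loss import chamfer_distance

x, y = torch.rand(2, 100, 3), torch.rand(2, 200, 3)
xn = torch.nn.functional.normalize(torch.rand(2, 100, 3), dim=-1)
yn = torch.nn.functional.normalize(torch.rand(2, 200, 3), dim=-1)
loss, loss_normals = chamfer_distance(
    x, y, x_normals=xn, y_normals=yn, single_directional=True, abs_cosine=False
)
```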
Reviewed By: bottler
Differential Revision: D46593980
fbshipit-source-id: b2e591706a0cdde1c2d361614cecebb84a581433
Summary: The fine implicit function was called before the coarse implicit function.
Reviewed By: shapovalov
Differential Revision: D46224224
fbshipit-source-id: 6b1cc00cc823d3ea7a5b42774c9ec3b73a69edb5
Summary:
1. We may need to store arrays of unknown shape in the database. This diff implements and tests their serialisation.
2. Previously, when a nonexistent metadata file was passed to SqlIndexDataset, it would try to open it, create an empty file, and then crash. We now open the file in read-only mode, so the error message is more intuitive. Note that the implementation is SQLite specific.
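A generic sketch of one way to round-trip an arbitrarily shaped array through a binary column (illustrative only, not necessarily the serialisation scheme used here):
```python
import io

import numpy as np

def encode_array(arr: np.ndarray) -> bytes:
    buf = io.BytesIO()
    np.save(buf, arr, allow_pickle=False)   # shape and dtype travel with the data
    return buf.getvalue()

def decode_array(blob: bytes) -> np.ndarray:
    return np.load(io.BytesIO(blob), allow_pickle=False)
```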
Reviewed By: bottler
Differential Revision: D46047857
fbshipit-source-id: 3064ae4f8122b4fc24ad3d6ab696572ebe8d0c26
Summary: I don't know why RE tests sometimes fail here, but maybe it's a race condition. If that's right, this should fix it.
Reviewed By: shapovalov
Differential Revision: D46020054
fbshipit-source-id: 20b746b09ad9bd77c2601ac681047ccc6cc27ed9
Summary:
This is mostly a refactoring diff to reduce friction in extending the frame data.
Slight functional changes: dataset getitem now accepts (seq_name, frame_number_as_singleton_tensor) as a non-advertised feature. Otherwise this code crashes:
```
item = dataset[0]
dataset[item.sequence_name, item.frame_number]
```
Reviewed By: bottler
Differential Revision: D45780175
fbshipit-source-id: 75b8e8d3dabed954a804310abdbd8ab44a8dea29
Summary:
Although we can load per-vertex normals in `load_obj`, saving per-vertex normals is not supported in `save_obj`.
This patch fixes that by allowing per-vertex normal data to be passed to `save_obj`:
``` python
def save_obj(
f: PathOrStr,
verts,
faces,
decimal_places: Optional[int] = None,
path_manager: Optional[PathManager] = None,
*,
verts_normals: Optional[torch.Tensor] = None,
faces_normals: Optional[torch.Tensor] = None,
verts_uvs: Optional[torch.Tensor] = None,
faces_uvs: Optional[torch.Tensor] = None,
texture_map: Optional[torch.Tensor] = None,
) -> None:
"""
Save a mesh to an .obj file.
Args:
f: File (str or path) to which the mesh should be written.
verts: FloatTensor of shape (V, 3) giving vertex coordinates.
faces: LongTensor of shape (F, 3) giving faces.
decimal_places: Number of decimal places for saving.
path_manager: Optional PathManager for interpreting f if
it is a str.
verts_normals: FloatTensor of shape (V, 3) giving the normal per vertex.
faces_normals: LongTensor of shape (F, 3) giving the index into verts_normals
for each vertex in the face.
verts_uvs: FloatTensor of shape (V, 2) giving the uv coordinate per vertex.
faces_uvs: LongTensor of shape (F, 3) giving the index into verts_uvs for
each vertex in the face.
texture_map: FloatTensor of shape (H, W, 3) representing the texture map
for the mesh which will be saved as an image. The values are expected
to be in the range [0, 1],
"""
```
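A minimal usage sketch of the extended signature above (values are placeholders): write a single triangle with per-vertex normals.
```python
import torch
from pytorch3d.io import save_obj

verts = torch.tensor([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = torch.tensor([[0, 1, 2]])
normals = torch.tensor([[0.0, 0.0, 1.0]] * 3)   # one normal per vertex
save_obj(
    "triangle.obj",
    verts,
    faces,
    verts_normals=normals,
    faces_normals=faces,   # indices into verts_normals per face vertex
)
```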
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1511
Reviewed By: shapovalov
Differential Revision: D45086045
Pulled By: bottler
fbshipit-source-id: 666efb0d2c302df6cf9f2f6601d83a07856bf32f
Summary:
I forgot to include these tests in D45086611 when transferring code from the pixar_replay repo.
They test the new ORM types used in SQL dataset and are SQL Alchemy 2.0 specific.
An important test for extending types serves as a proof of concept for the generality of SQL Dataset. The idea is to extend FrameAnnotation and FrameData in parallel.
Reviewed By: bottler
Differential Revision: D45529284
fbshipit-source-id: 2a634e518f580c312602107c85fc320db43abcf5
Summary:
Added a suite of functions and code additions to the experimental_gltf_io.py file to enable saving Meshes in TexturesVertex format into a .glb file.
Also added a test to test_io_gltf.py to check the functionality, as described in the Test Plan.
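A minimal usage sketch (hedged) of writing a vertex-coloured mesh to .glb with the experimental glTF writer after this change:
```python
import torch
from pytorch3d.io import IO
from pytorch3d.io.experimental_gltf_io import MeshGlbFormat
from pytorch3d.renderer import TexturesVertex
from pytorch3d.structures import Meshes

verts = torch.tensor([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = torch.tensor([[0, 1, 2]])
colors = torch.ones(1, 3, 3)  # white per-vertex colours, shape (N, V, 3)
mesh = Meshes(verts=[verts], faces=[faces], textures=TexturesVertex(verts_features=colors))

io = IO()
io.register_meshes_format(MeshGlbFormat())   # opt in to the experimental format
io.save_mesh(mesh, "mesh.glb", include_textures=True)
```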
Reviewed By: bottler
Differential Revision: D44969144
fbshipit-source-id: 9ce815a1584b510442fa36cc4dbc8d41cc3786d5
Summary:
Moving SQL dataset to PyTorch3D. It has been extensively tested in pixar_replay.
It requires SQLAlchemy 2.0, which is not supported in fbcode. So I exclude the sources and tests that depend on it from buck TARGETS.
Reviewed By: bottler
Differential Revision: D45086611
fbshipit-source-id: 0285f03e5824c0478c70ad13731525bb5ec7deef
Summary:
We currently support caching bounding boxes in MaskAnnotation. If present, they are not re-computed from the mask. However, the masks need to be loaded for the bbox to be set.
This diff fixes that. Even if load_masks / load_blobs are unset, the bounding box can be picked up from the metadata.
Reviewed By: bottler
Differential Revision: D45144918
fbshipit-source-id: 8a2e2c115e96070b6fcdc29cbe57e1cee606ddcd
Summary: The code does not crash if depth map/mask are not given.
Reviewed By: bottler
Differential Revision: D45082985
fbshipit-source-id: 3610d8beb4ac897fbbe52f56a6dd012a6365b89b
Summary: Provide an extension point pre_expand to let a configurable class A make sure another class B is registered before A is expanded. This reduces top level imports.
Reviewed By: bottler
Differential Revision: D44504122
fbshipit-source-id: c418bebbe6d33862d239be592d9751378eee3a62
Summary:
Introduces the OverfitModel for NeRF-style training with overfitting to one scene.
It is a specific case of GenericModel. It has been disentangled to ease usage.
## General modification
1. Modularize a minimum GenericModel to introduce OverfitModel
2. Introduce OverfitModel and ensure through unit testing that it behaves like GenericModel.
## Modularization
The following methods have been extracted from GenericModel to allow modularity with ManyViewModel:
- get_objective is now a call to weighted_sum_losses
- log_loss_weights
- prepare_inputs
The generic methods have been moved to a utils.py file.
Simplify the code to introduce OverfitModel.
Private methods like chunk_generator are now public and can be used by ManyViewModel.
Reviewed By: shapovalov
Differential Revision: D43771992
fbshipit-source-id: 6102aeb21c7fdd56aa2ff9cd1dd23fd9fbf26315
Summary: Indexing with a big matrix now fails with a ValueError, possibly because of pytorch improvements. Remove the testcase for it.
Reviewed By: davidsonic
Differential Revision: D42609741
fbshipit-source-id: 0a5a6632ed199cb942bfc4cc4ed347b72e491125
Summary: For the new API, filtering iterators over sequences by subsets is quite helpful. The change is backwards compatible.
Reviewed By: bottler
Differential Revision: D42739669
fbshipit-source-id: d150a404aeaf42fd04a81304c63a4cba203f897d
Summary:
Fixes some issues with RayBundle plotting:
- allows plotting raybundles on gpu
- view -> reshape since we do not require contiguous raybundle tensors as input
Reviewed By: bottler, shapovalov
Differential Revision: D42665923
fbshipit-source-id: e9c6c7810428365dca4cb5ec80ef15ff28644163
Summary: Use IndexError so that a camera object is an iterable
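A generic Python sketch (hypothetical class, not the cameras code itself) of why IndexError is the right exception: the legacy iteration protocol calls __getitem__ with 0, 1, 2, ... and stops cleanly on IndexError, so the object works in a plain for loop.
```python
class Batch:
    def __init__(self, n: int):
        self.n = n

    def __getitem__(self, idx: int) -> int:
        if idx >= self.n or idx < -self.n:
            raise IndexError(idx)   # terminates iteration instead of crashing
        return idx % self.n

for item in Batch(3):   # iterates 0, 1, 2 and stops
    print(item)
```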
Reviewed By: shapovalov
Differential Revision: D42312021
fbshipit-source-id: 67c417d5f1398e8b30a6944468eda057b4ceb444
Summary: Make GLB files report their own length correctly. They were off by 28.
Reviewed By: davidsonic
Differential Revision: D41838340
fbshipit-source-id: 9cd66e8337c142298d5ae1d7c27e51fd812d5c7b
Summary: Write the amalgamated mesh from the Mesh module to glb. In this version, the json header and the binary data specified by the buffer are merged into glb. The image texture attributes are added.
Reviewed By: bottler
Differential Revision: D41489778
fbshipit-source-id: 3af0e9a8f9e9098e73737a254177802e0fb6bd3c
Summary:
Rasterize MC was not adapted to heterogeneous bundles.
There are some caveats though:
1) on CO3D, we get up to 18 points per image, which is too few for a reasonable visualisation (see below);
2) rasterising for a batch of 100 is slow.
I also moved the unpacking code close to the bundle to be able to reuse it.
{F789678778}
Reviewed By: bottler, davnov134
Differential Revision: D41008600
fbshipit-source-id: 9f10f1f9f9a174cf8c534b9b9859587d69832b71
Summary: Fix indexing of directions after filtering of points by scaffold.
Reviewed By: shapovalov
Differential Revision: D40853482
fbshipit-source-id: 9cfdb981e97cb82edcd27632c5848537ed2c6837
Summary:
Allows loading of multiple categories.
Multiple categories are provided in a comma-separated list of category names.
Reviewed By: bottler, shapovalov
Differential Revision: D40803297
fbshipit-source-id: 863938be3aa6ffefe9e563aede4a2e9e66aeeaa8
Summary:
According to the profiler trace D40326775, _check_valid_rotation_matrix is slow because of the aten::all_close operation and _safe_det_3x3 bottlenecks. Disable the check by default unless the environment variable PYTORCH3D_CHECK_ROTATION_MATRICES is set to 1.
Comparison after applying the change:
```
Profiling/Function   get_world_to_view (ms)   transform_points (ms)   specular (ms)
before               12.751                   18.577                  21.384
after                4.432 (34.7%)            9.248 (49.8%)           11.507 (53.8%)
```
Profiling trace:
https://pxl.cl/2h687
More details in https://docs.google.com/document/d/1kfhEQfpeQToikr5OH9ZssM39CskxWoJ2p8DO5-t6eWk/edit?usp=sharing
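A minimal sketch of opting back in to the validation (hedged: the exact point at which the variable is read is not shown here, so set it before the camera code runs, e.g. `PYTORCH3D_CHECK_ROTATION_MATRICES=1 python train.py`):
```python
import os

# Re-enable the rotation-matrix check before constructing/using cameras.
os.environ["PYTORCH3D_CHECK_ROTATION_MATRICES"] = "1"
```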
Reviewed By: kjchalup
Differential Revision: D40442503
fbshipit-source-id: 954b58de47de235c9d93af441643c22868b547d0