Summary: This commit refines the pulsar examples and tests. The examples are fully adjusted to adhere to the PEP 8 style guide, and additional comments are added.
Reviewed By: nikhilaravi
Differential Revision: D24723391
fbshipit-source-id: 6d289006f080140159731e7f3a8c98b582164f1a
Summary:
This diff builds on top of the `pulsar integration` diff to provide a unified interface for the existing PyTorch3D point renderer and Pulsar. For more information about the pulsar backend, see the release notes and the paper (https://arxiv.org/abs/2004.07484). For information on how to use the backend, see the point cloud rendering notebook and the examples in the folder docs/examples.
The unified interfaces are completely consistent. Switching the render backend is as easy as using `renderer = PulsarPointsRenderer(rasterizer=rasterizer).to(device)` instead of `renderer = PointsRenderer(rasterizer=rasterizer, compositor=compositor)` and adding the `gamma` parameter to the forward function. All PyTorch3D camera types are supported as far as possible; keyword arguments are properly forwarded to the camera. The `PerspectiveCamera` and `OrthographicCamera` require znear and zfar as additional parameters for the forward pass.
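A minimal sketch of the switch (assuming a build where `PulsarPointsRenderer` is exported from `pytorch3d.renderer`; the per-batch `gamma`/`znear`/`zfar` tuples follow the description above):

```python
import torch
from pytorch3d.renderer import (
    PerspectiveCameras,
    PointsRasterizationSettings,
    PointsRasterizer,
    PulsarPointsRenderer,
)
from pytorch3d.structures import Pointclouds

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
cameras = PerspectiveCameras(device=device)
rasterizer = PointsRasterizer(
    cameras=cameras,
    raster_settings=PointsRasterizationSettings(image_size=256, radius=0.01),
)
# Instead of: renderer = PointsRenderer(rasterizer=rasterizer, compositor=compositor)
renderer = PulsarPointsRenderer(rasterizer=rasterizer).to(device)

points = torch.rand(1, 100, 3, device=device) * 2.0 - 1.0
features = torch.rand(1, 100, 3, device=device)
pointcloud = Pointclouds(points=points, features=features)

# gamma controls blending hardness; znear/zfar are required here because a
# PerspectiveCameras instance is used (one value per batch element).
images = renderer(pointcloud, gamma=(1e-4,), znear=(0.1,), zfar=(100.0,))
```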
Reviewed By: nikhilaravi
Differential Revision: D21421443
fbshipit-source-id: 4aa0a83a419592d9a0bb5d62486a1cdea9d73ce6
Summary:
This diff integrates the pulsar renderer source code into PyTorch3D as an alternative backend for the PyTorch3D point renderer. This diff is the first of a series of three diffs to complete that migration and focuses on the packaging and integration of the source code.
For more information about the pulsar backend, see the release notes and the paper (https://arxiv.org/abs/2004.07484). For information on how to use the backend, see the point cloud rendering notebook and the examples in the folder `docs/examples`.
Tasks addressed in the following diffs:
* Add the PyTorch3D interface,
* Add notebook examples and documentation (or adapt the existing ones to feature both interfaces).
Reviewed By: nikhilaravi
Differential Revision: D23947736
fbshipit-source-id: a5e77b53e6750334db22aefa89b4c079cda1b443
Summary: Currently, to initialize the Cameras class we require the principal point, focal length and other parameters to be specified, from which we calculate the intrinsic matrix. In some cases the matrix might be directly available, e.g. from a dataset and the associated metadata for an image.
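For illustration, a hedged sketch of constructing cameras directly from such a matrix (the keyword name `K` and the (N, 4, 4) layout are assumptions based on this diff's description):

```python
import torch
from pytorch3d.renderer import PerspectiveCameras

# A (1, 4, 4) calibration matrix, e.g. taken from dataset metadata.
K = torch.tensor([[
    [1000.0,    0.0, 128.0, 0.0],
    [   0.0, 1000.0, 128.0, 0.0],
    [   0.0,    0.0,   0.0, 1.0],
    [   0.0,    0.0,   1.0, 0.0],
]])
cameras = PerspectiveCameras(K=K)  # no focal_length/principal_point needed
```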
Reviewed By: nikhilaravi
Differential Revision: D24489509
fbshipit-source-id: 1b411f19c5f6c8074bcfbf613f3339d5e242c119
Summary: This recently added test is sensitive to the version of PIL because of different algorithms for drawing ellipses/circles. Remove it, as there is no obvious safe way to test this, and replace it with a test for the underlying `centres_for_image()`.
Reviewed By: theschnitz
Differential Revision: D24622465
fbshipit-source-id: e46d7384df491c71ac87ba8bbbce89507ac40080
Summary: New methods to directly plot a TexturesUV map with its used points, using PIL and matplotlib.
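A short usage sketch, assuming the helpers are exposed in `pytorch3d.vis.texture_vis` under these names; `model.obj` is a placeholder path:

```python
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.vis.texture_vis import texturesuv_image_PIL

mesh = load_objs_as_meshes(["model.obj"])  # a mesh with a TexturesUV texture
# Returns a PIL image of the texture map with the used (u, v) points drawn on it.
image = texturesuv_image_PIL(mesh.textures)
image.save("texture_with_points.png")
```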
Reviewed By: gkioxari
Differential Revision: D23782968
fbshipit-source-id: 692970857b5be13a35a3175dc82ac03963a73555
Summary: We can represent a rotation as a vector in the axis direction, whose magnitude is the anticlockwise rotation in radians around that axis.
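A quick sketch of the representation, assuming the conversion helpers are named `axis_angle_to_matrix` / `matrix_to_axis_angle` in `pytorch3d.transforms`:

```python
import math
import torch
from pytorch3d.transforms import axis_angle_to_matrix, matrix_to_axis_angle

# A rotation of pi/2 radians anticlockwise around the z-axis.
axis_angle = torch.tensor([[0.0, 0.0, math.pi / 2]])
R = axis_angle_to_matrix(axis_angle)  # (1, 3, 3) rotation matrix
recovered = matrix_to_axis_angle(R)   # round-trips to the input
```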
Reviewed By: gkioxari
Differential Revision: D24306293
fbshipit-source-id: 2e0f138eda8329f6cceff600a6e5f17a00e4deb7
Summary:
Small fix and updated tests for the multi-GPU rendering case.
This resolves the issue seen in: https://github.com/facebookresearch/pytorch3d/issues/401
Reviewed By: gkioxari
Differential Revision: D24314681
fbshipit-source-id: 84c5a5359844c77518b48044001daa9a86f3c43a
Summary: Issue #119. The function `sqrt(max(x, 0))` is not convex and has an infinite one-sided gradient at 0, but 0 is a valid subgradient there. Here we implement it in such a way as to give 0 as the gradient at 0.
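A minimal sketch of one way to get this behavior with a custom autograd function; this is illustrative, not necessarily the exact implementation in the diff:

```python
import torch

class SafeSqrt(torch.autograd.Function):
    """sqrt(max(x, 0)) whose gradient at 0 is defined to be 0."""

    @staticmethod
    def forward(ctx, x):
        out = torch.sqrt(torch.clamp(x, min=0.0))
        ctx.save_for_backward(out)
        return out

    @staticmethod
    def backward(ctx, grad_output):
        (out,) = ctx.saved_tensors
        # d/dx sqrt(x) = 1 / (2 sqrt(x)); guard the division, then zero the
        # gradient wherever the output (and hence the input) was <= 0.
        safe = torch.where(out > 0, out, torch.ones_like(out))
        grad = grad_output / (2.0 * safe)
        return torch.where(out > 0, grad, torch.zeros_like(grad))

x = torch.tensor([-1.0, 0.0, 4.0], requires_grad=True)
SafeSqrt.apply(x).sum().backward()
print(x.grad)  # tensor([0.0000, 0.0000, 0.2500])
```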
Reviewed By: gkioxari
Differential Revision: D24306294
fbshipit-source-id: 48d136faca083babad4d64970be7ea522dbe9e09
Summary:
Fix for GitHub issue #381.
The example mesh provided in the issue only had material properties but no texture image. The current implementation of texture atlasing generated an atlas using both the material properties and the texture image, but it only worked if there was a texture image and associated vertex uv coordinates. I have now modified the texture atlas creation so that it doesn't require an image and can work with materials that only have material properties.
Reviewed By: gkioxari
Differential Revision: D24153068
fbshipit-source-id: 63e9d325db09a84b336b83369d5342ce588a9932
Summary: Enhance every texture type with `faces_verts_textures_packed`, which allows users to query the texture of each vertex of each face in the mesh.
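A small sketch with a vertex-colored mesh; the (F, 3, C) output shape is an assumption based on the packed-faces convention:

```python
import torch
from pytorch3d.renderer import TexturesVertex
from pytorch3d.structures import Meshes

verts = torch.rand(1, 4, 3)
faces = torch.tensor([[[0, 1, 2], [0, 2, 3]]])
textures = TexturesVertex(verts_features=torch.rand(1, 4, 3))
mesh = Meshes(verts=verts, faces=faces, textures=textures)

# Texture value for each of the 3 vertices of each packed face.
face_textures = mesh.textures.faces_verts_textures_packed()  # (2, 3, 3)
```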
Reviewed By: nikhilaravi
Differential Revision: D24058778
fbshipit-source-id: 19d0e3a244fa96aae462c47bf52e07dfd3b7c6f0
Summary:
Enhanced `sample_points_from_meshes` with texture sampling
* This new feature is used to return textures corresponding to the sampled points in `sample_points_from_meshes`
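A brief sketch of the new option, assuming the flag is named `return_textures`:

```python
import torch
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.renderer import TexturesVertex
from pytorch3d.structures import Meshes

verts = torch.rand(1, 4, 3)
faces = torch.tensor([[[0, 1, 2], [0, 2, 3]]])
mesh = Meshes(
    verts=verts,
    faces=faces,
    textures=TexturesVertex(verts_features=torch.rand(1, 4, 3)),
)
points, textures = sample_points_from_meshes(
    mesh, num_samples=500, return_textures=True
)  # both of shape (1, 500, 3)
```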
Reviewed By: nikhilaravi
Differential Revision: D24031525
fbshipit-source-id: 8e5d8f784cc38aa391aa8e84e54423bd9fad7ad1
Summary:
Support for moving all the tensors of the renderer to another device by calling `renderer.to(new_device)`
Currently the `MeshRenderer`, `MeshRasterizer` and `SoftPhongShader` (and other shaders) are all of type `nn.Module`, which already supports easily moving tensors of submodules (defined as class attributes) to a different device. However, the class attributes of the rasterizer and shader (e.g. cameras, lights, materials) are of type `TensorProperties`, not `nn.Module`, so we need to explicitly create a `to` method to move these tensors to the device. Note that the `TensorProperties` class already has a `to` method, so we only need to call `cameras.to(device)` and don't need to worry about the internal tensors.
The other option is of course making these other classes (cameras, lights etc) also of type nn.Module.
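A minimal sketch of the intended usage (the camera/light classes are chosen for illustration):

```python
import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras,
    MeshRasterizer,
    MeshRenderer,
    PointLights,
    RasterizationSettings,
    SoftPhongShader,
)

cameras = FoVPerspectiveCameras()
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras,
        raster_settings=RasterizationSettings(image_size=256),
    ),
    shader=SoftPhongShader(cameras=cameras, lights=PointLights()),
)

# Moves the nn.Module submodules and the TensorProperties members
# (cameras, lights, materials) in one call.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
renderer = renderer.to(device)
```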
Reviewed By: gkioxari
Differential Revision: D23885107
fbshipit-source-id: d71565c442181f739de4d797076ed5d00fb67f8e
Summary:
I'm constantly encountering 3D models with resources that have spaces in their filenames (especially on Windows) and therefore they can't be loaded in pytorch3d. Let me know what you think.
Thanks
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/358
Reviewed By: bottler
Differential Revision: D23798492
Pulled By: nikhilaravi
fbshipit-source-id: 4d85b7ee05339486d2e5ef53a531f8e6052251c5
Summary:
Make save_ply save to binary instead of ascii; an option makes the previous functionality available. save_ply's API accepts a stream, but this is undocumented; that stream must now be a binary stream, not a text stream.
Avoiding warnings about making tensors from immutable numpy arrays.
Possible performance improvement when reading binary files.
Fix reading zero-length binary lists.
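A hedged sketch of the new default and the opt-out (the flag name `ascii` is assumed from the description above):

```python
import torch
from pytorch3d.io import save_ply

verts = torch.rand(4, 3)
faces = torch.tensor([[0, 1, 2], [0, 2, 3]])

save_ply("mesh.ply", verts=verts, faces=faces)  # binary, the new default
save_ply("mesh_ascii.ply", verts=verts, faces=faces, ascii=True)  # old behavior

# Streams passed to save_ply must now be opened in binary mode:
with open("mesh_stream.ply", "wb") as f:
    save_ply(f, verts=verts, faces=faces)
```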
Reviewed By: nikhilaravi
Differential Revision: D22333118
fbshipit-source-id: b423dfd3da46e047bead200255f47a7707306811
Summary: Support rendering different color backgrounds for pointclouds with both compositors
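A hedged sketch, assuming the option is exposed as a `background_color` argument on the compositors:

```python
from pytorch3d.renderer import AlphaCompositor, NormWeightedCompositor

compositor = AlphaCompositor(background_color=(0.2, 0.2, 0.8))
# or, for norm-weighted compositing:
compositor = NormWeightedCompositor(background_color=(1.0, 1.0, 1.0))
```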
Reviewed By: nikhilaravi
Differential Revision: D23611043
fbshipit-source-id: ab029650d51349340372c5bd66700e6577d48851
Summary: When the camera is vertically oriented, calculating the look_at x-axis (also known as the "right" vector) does not succeed, resulting in the x-axis being placed at the origin. Adds a check to correctly calculate the x-axis if this case occurs.
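For reference, the previously degenerate configuration, where the view direction is parallel to the default up vector:

```python
from pytorch3d.renderer import look_at_view_transform

# Camera directly above the origin, looking straight down: the view
# direction is parallel to the default up vector (0, 1, 0).
R, T = look_at_view_transform(dist=2.0, elev=90.0, azim=0.0)
# With the fix, R is a valid rotation; its x-axis is no longer all zeros.
```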
Reviewed By: nikhilaravi, sbranson
Differential Revision: D23511859
fbshipit-source-id: ee5145cdbecdbe2f7c7d288588bd0899480cb327
Summary:
This fixes two small issues with blending.py:softmax_rgb_blend():
1) zfar and znear attributes are propagated from the camera settings instead of just using default settings of znear=1.0 and zfar=100.0
2) A check is added to prevent arithmetic overflow in softmax_rgb_blend()
This is a fix in response to https://github.com/facebookresearch/pytorch3d/issues/334
where meshes rendered using a SoftPhongShader with faces_per_pixel=1 appear black. This only occurs when the scale of the mesh is large (vertex values > 100, where 100 is the default value of zfar). This fix allows the caller to increase the value of cameras.zfar to match the scale of their mesh.
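Sketched briefly, the user-level behavior this enables:

```python
from pytorch3d.renderer import FoVPerspectiveCameras

# For a mesh whose vertex coordinates reach ~1000 units, increase zfar so the
# value propagated into softmax_rgb_blend keeps the blending numerically safe.
cameras = FoVPerspectiveCameras(znear=1.0, zfar=1500.0)
```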
Reviewed By: nikhilaravi
Differential Revision: D23517541
fbshipit-source-id: ab8631ce9e5f2149f140b67b13eff857771b8807
Summary:
Add a join_scene method to all the textures to allow the join_mesh function to include textures. Rename the join_mesh function to join_meshes_as_scene.
For TexturesAtlas, we now interpolate if the user attempts to have the resolution vary across the batch. This doesn't look great if the resolution is already very low.
For TexturesUV, a rectangle packing function is required; this diff implements a simple one.
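A brief sketch of the renamed entry point (`a.obj`/`b.obj` are placeholder paths):

```python
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.structures import join_meshes_as_scene

meshes = load_objs_as_meshes(["a.obj", "b.obj"])  # a batch of 2 textured meshes
scene = join_meshes_as_scene(meshes)  # one mesh; the textures are joined too
```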
Reviewed By: gkioxari
Differential Revision: D23188773
fbshipit-source-id: c013db061a04076e13e90ccc168a7913e933a9c5
Summary:
Allow, and make default, align_corners=True for texture maps. Allow changing the padding_mode and set the default to be "border" which produces more logical results. Some new documentation.
The previous behavior corresponds to padding_mode="zeros" and align_corners=False.
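A sketch of opting back into the old behavior (keyword names per the description above; the tensors are tiny placeholders):

```python
import torch
from pytorch3d.renderer import TexturesUV

maps = torch.rand(1, 8, 8, 3)           # placeholder texture map
faces_uvs = torch.tensor([[[0, 1, 2]]])
verts_uvs = torch.rand(1, 3, 2)

# New defaults are align_corners=True, padding_mode="border"; this
# reproduces the previous behavior.
textures = TexturesUV(
    maps=maps,
    faces_uvs=faces_uvs,
    verts_uvs=verts_uvs,
    align_corners=False,
    padding_mode="zeros",
)
```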
Reviewed By: gkioxari
Differential Revision: D23268775
fbshipit-source-id: 58d6229baa591baa69705bcf97471c80ba3651de
Summary:
The look_at_view_transform did not give the correct results when the object location `at` was not (0,0,0).
The problem was in computing the cameras' location `C` in world coordinates. It only took into account the camera position from the spherical angles, but ignored the object location in the world coordinate system. I simply modified the `C` tensor to take into account the object's location, which is not necessarily at the origin.
I ran the unit tests and all but 4 passed; the 4 failures shared the same error message: `RuntimeError: CUDA error: invalid device ordinal`. However, the same happens before this patch, so I believe these errors are unrelated.
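A small example of the now-correct behavior:

```python
from pytorch3d.renderer import look_at_view_transform

# The camera is now positioned relative to `at`, not the origin, so the
# returned transform really looks at (1, 2, 3) from the given angles.
R, T = look_at_view_transform(dist=2.7, elev=10.0, azim=30.0, at=((1.0, 2.0, 3.0),))
```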
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/230
Reviewed By: gkioxari
Differential Revision: D23278126
Pulled By: nikhilaravi
fbshipit-source-id: c06e891bc46de8222325ee7b37aa43cde44648e8
Summary:
- Add support for loading textures from ShapeNet Obj files as a texture atlas.
- Support textured rendering of shapenet models
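A hedged sketch of the loading side (keyword names like `load_textures` and `texture_resolution` are assumptions; `path/to/ShapeNetCore.v1` is a placeholder):

```python
from pytorch3d.datasets import ShapeNetCore

dataset = ShapeNetCore(
    "path/to/ShapeNetCore.v1",
    load_textures=True,
    texture_resolution=4,  # each face gets a 4x4 texture patch
)
model = dataset[0]
atlas = model["textures"]  # per-face atlas, e.g. shape (F, 4, 4, 3)
```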
Reviewed By: gkioxari
Differential Revision: D23141143
fbshipit-source-id: 26eb81758d4cdbd6d820b072b58f5c6c08cb90bc
Summary:
Found a bug in extending textures with vertex uv coordinates. This was due to the padded -> list conversion of vertex uv coordinates: the number of vertices in the mesh and in verts_uvs can differ, e.g. if a vertex is shared between 3 faces, it can have up to 3 different uv coordinates. Therefore we cannot convert directly from padded to list using `_num_verts_per_mesh`.
Reviewed By: bottler
Differential Revision: D23233595
fbshipit-source-id: 0c66d15baae697ead0bdc384f74c27d4c6539fc9
Summary:
faces_uvs_packed and verts_uvs_packed were only used in one place, and the definition of the former was ambiguous. This meant that the wrong coordinates could be used for meshes other than the first in the batch. I have therefore removed both functions and built their common result inline. Added a test that a simple batch of two meshes is rendered consistently with the rendering of each alone; this test would have failed before.
I hope this fixes https://github.com/facebookresearch/pytorch3d/issues/283.
Some other small improvements to the textures code.
Reviewed By: nikhilaravi
Differential Revision: D23161936
fbshipit-source-id: f99b560a46f6b30262e07028b049812bc04350a7
Summary: A triangle is culled if any of its vertices is behind the camera. This fixes the incorrect rendering of triangles that are partially behind the camera, where the screen coordinate calculations break down. It does not yet handle triangles that are partially behind the camera but still intersect the view frustum.
Reviewed By: nikhilaravi
Differential Revision: D22856181
fbshipit-source-id: a9cbaa1327d89601b83d0dfd3e4a04f934a4a213
Summary:
Refactor cameras
* CamerasBase was enhanced with `transform_points_screen`, which transforms projected points from NDC to screen space (see the sketch after this list)
* OpenGLPerspective, OpenGLOrthographic -> FoVPerspective, FoVOrthographic
* SfMPerspective, SfMOrthographic -> Perspective, Orthographic
* PerspectiveCamera can optionally be constructed with screen space parameters
* Note on Cameras and coordinate systems was added
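A sketch of the new screen-space transform (the `image_size` argument format is an assumption):

```python
import torch
from pytorch3d.renderer import FoVPerspectiveCameras  # formerly OpenGLPerspectiveCameras

cameras = FoVPerspectiveCameras()
points = torch.rand(1, 5, 3) + torch.tensor([0.0, 0.0, 2.0])  # in front of the camera

# Projects to NDC and then maps to pixel coordinates for a 128x128 image.
screen_points = cameras.transform_points_screen(points, image_size=((128, 128),))
```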
Reviewed By: nikhilaravi
Differential Revision: D23168525
fbshipit-source-id: dd138e2b2cc7e0e0d9f34c45b8251c01266a2063
Summary:
Small fix to the softmax blending function.
To avoid overflow in the exponential for the softmax, the exponent is shifted by the maximum value. In the final calculation of the color there is a weighted sum between the pixel color and the background color; in order for the sum to be correct, the background color also needs to be handled in the same way with the shifted exponent.
Reviewed By: gkioxari
Differential Revision: D23148301
fbshipit-source-id: 86066586ee7d3ce7bd4a2076b12ce191fbd151a7
Summary: The recently added part of a test was assuming that the random gpu was gpu 0.
Reviewed By: nikhilaravi
Differential Revision: D22948397
fbshipit-source-id: 88107e19fc3118e763f95be43a614941176a08f9
Summary:
A fairly big refactor of the texturing API with some breaking changes to how textures are defined.
Main changes:
- There are now 3 types of texture classes: `TexturesUV`, `TexturesAtlas` and `TexturesVertex` (construction is sketched after this list). Each class:
- has a `sample_textures` function which accepts the `fragments` from rasterization and returns `texels`. This means that the shaders will not need to know the type of the mesh texture which will resolve several issues people were reporting on GitHub.
- has a `join_batch` method for joining multiple textures of the same type into a batch
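A brief construction sketch of the three classes (tensor shapes are assumptions based on the new API):

```python
import torch
from pytorch3d.renderer import TexturesAtlas, TexturesUV, TexturesVertex

# One color per vertex.
tex_vertex = TexturesVertex(verts_features=torch.rand(1, 4, 3))
# One RxR color grid per face (here R=4, for 2 faces).
tex_atlas = TexturesAtlas(atlas=torch.rand(1, 2, 4, 4, 3))
# A texture image plus per-face vertex uv coordinates.
tex_uv = TexturesUV(
    maps=torch.rand(1, 8, 8, 3),
    faces_uvs=torch.tensor([[[0, 1, 2]]]),
    verts_uvs=torch.rand(1, 3, 2),
)
```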
Reviewed By: gkioxari
Differential Revision: D21067427
fbshipit-source-id: 4b346500a60181e72fdd1b0dd89b5505c7a33926
Summary: Reduce the size of the data in this test, so that on circleci it doesn't run out of memory when pytorch (1.6) is used.
Reviewed By: gkioxari
Differential Revision: D22801490
fbshipit-source-id: 9591253c3d47430facd769a2c51a0b1722e0a305
Summary:
Sample/get all views at loading time instead of at return time;
Load only the views from the split instead of all 24 views;
Test that the number of views loaded is correct for each category.
Reviewed By: nikhilaravi
Differential Revision: D22631414
fbshipit-source-id: 1c5ce99fe2bdf6618c1aa0b69bb6899473376bc2
Summary:
1. CircleCI tests fail because of different randomisation. I was able to reproduce it on devfair (with an older version of pytorch3d though), but with a new threshold, it works. Let’s push and see if it will work in CircleCI.
2. Fixing linter’s issue with `l` variable name.
Reviewed By: bottler
Differential Revision: D22573244
fbshipit-source-id: 32cebc8981883a3411ed971eb4a617469376964d
Summary:
Added support for barycentric clipping in the C++/CUDA rasterization kernels which can be switched on/off via a rasterization setting.
Added tests and a benchmark to compare with the current implementation in PyTorch; for some cases of large image size / faces per pixel, the CUDA version is 10x faster.
Reviewed By: gkioxari
Differential Revision: D21705503
fbshipit-source-id: e835c0f927f1e5088ca89020aef5ff27ac3a8769
Summary:
C++/CUDA implementation of forward and backward passes for the sigmoid alpha blending function.
This is slightly faster than the vectorized implementation in Python, but more importantly uses less memory due to fewer tensors being created.
Reviewed By: gkioxari
Differential Revision: D19980671
fbshipit-source-id: 0779055d2c68b1f20fb0870e60046077ef4613ff
Summary: Adding a render function for R2N2.
Reviewed By: nikhilaravi
Differential Revision: D22230228
fbshipit-source-id: a9f588ddcba15bb5d8be1401f68d730e810b4251
Summary: Skeleton of R2N2 that for now only returns verts and faces extracted from ShapeNetCore v1.
Reviewed By: nikhilaravi
Differential Revision: D22203656
fbshipit-source-id: 00db6ac76bfdb76fdbc77a2087c34a3f0ff01e6a
Summary: Adding collate_batched_meshes to datasets.utils: it takes in a list of dictionaries and merges them into one dictionary, while adding a merged mesh to the dictionary.
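A hedged usage sketch with a standard DataLoader (`path/to/ShapeNetCore.v1` is a placeholder; the `"mesh"` key is assumed from the description above):

```python
from torch.utils.data import DataLoader
from pytorch3d.datasets import ShapeNetCore, collate_batched_meshes

dataset = ShapeNetCore("path/to/ShapeNetCore.v1")  # items are dictionaries
loader = DataLoader(dataset, batch_size=4, collate_fn=collate_batched_meshes)
batch = next(iter(loader))
mesh = batch["mesh"]  # the merged Meshes object added by the collate function
```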
Reviewed By: nikhilaravi
Differential Revision: D22180404
fbshipit-source-id: f811f9a140f09638f355ad5739bffa6ee415819f
Summary: Additional functionality for the renderer in ShapeNetCore: users can select which objects to render by specifying their model_ids, by choosing several random objects in some categories, or by specifying indices of the objects in the loaded dataset. (Currently doesn't support changing lighting; still investigating why lighting is causing instability in renderings.)
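A hedged sketch of the selection options (keyword names such as `model_ids`, `categories`, `sample_nums` and `idxs` are assumed from this description; the ids and path are placeholders):

```python
from pytorch3d.datasets import ShapeNetCore

dataset = ShapeNetCore("path/to/ShapeNetCore.v1")

# By explicit model ids:
images = dataset.render(model_ids=["model_a", "model_b"])
# By sampling 2 random objects from a category:
images = dataset.render(categories=["chair"], sample_nums=[2])
# By indices into the loaded dataset:
images = dataset.render(idxs=[0, 7, 42])
```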
Reviewed By: nikhilaravi
Differential Revision: D22179594
fbshipit-source-id: 74c49094ffa3ea2eb71de9451f9e5da5053d356d
Summary: Adding a renderer to ShapeNetCore (Note that the lights are currently turned off for the test; will investigate why lighting causes instability in rendering)
Reviewed By: nikhilaravi
Differential Revision: D22102673
fbshipit-source-id: a704756a1e93b61d5a879f0e5ee14ebcb0df49d7