Summary: Add an option to run tests without the OpenGL Renderer.
Reviewed By: patricklabatut
Differential Revision: D53573400
fbshipit-source-id: 54a14e7b2f156d24e0c561fdb279f4a9af01b793
Summary: Performance improvement: Use torch.lerp to map uv coordinates to the range needed for grid_sample (i.e. map [0, 1] to [-1, 1] and invert the y-axis)
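As a rough illustration (a hypothetical helper, not the code in this diff), the whole mapping can be expressed as a single broadcasted lerp:
```python
import torch

def uv_to_grid_sample_coords(uv: torch.Tensor) -> torch.Tensor:
    # uv: (..., 2) with values in [0, 1]. grid_sample expects [-1, 1] with the
    # y-axis pointing the other way, so u maps to [-1, 1] and v maps to [1, -1].
    # torch.lerp(start, end, w) = start + w * (end - start).
    start = uv.new_tensor([-1.0, 1.0])
    end = uv.new_tensor([1.0, -1.0])
    return torch.lerp(start, end, uv)
```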
Reviewed By: bottler
Differential Revision: D51961728
fbshipit-source-id: db19a5e3f482e9af7b96b20f88a1e5d0076dac43
Summary:
1) Update the rasterizer/point rasterizer to accommodate fisheye cameras. Specifically, transform_points is used in place of explicit transform compositions.
2) In the rasterizer unit tests, update the corresponding tests for rasterizer and point_rasterizer. Address comments to test the fisheye camera against the perspective camera when distortions are turned off.
3) Address comments to add an end-to-end test for fisheye cameras. In test_render_meshes, fisheye cameras are added to the camera enumerations whenever possible.
4) Test renderings with fisheye cameras of different parameters on the cow mesh.
5) Use compositions for linear cameras whenever possible.
Reviewed By: kjchalup
Differential Revision: D38932736
fbshipit-source-id: 5b7074fc001f2390f4cf43c7267a8b37fd987547
Summary: Only import it if you ask for it.
Reviewed By: kjchalup
Differential Revision: D38327167
fbshipit-source-id: 3f05231f26eda582a63afc71b669996342b0c6f9
Summary:
Adding MeshRasterizerOpenGL, a faster alternative to MeshRasterizer. The new rasterizer follows the ideas from "Differentiable Surface Rendering via non-Differentiable Sampling".
The new rasterizer is 20x faster on a 2M-face mesh (try pose optimization on Nefertiti from https://www.cs.cmu.edu/~kmcrane/Projects/ModelRepository/!). The larger the mesh, the larger the speedup.
There are two main disadvantages:
* The new rasterizer works with an OpenGL backend, so it requires pycuda.gl and pyopengl to be installed (though we avoided writing any C++ code, everything is in Python!)
* The new rasterizer is non-differentiable. However, you can still differentiate the rendering function if you use it with the new SplatterPhongShader which we recently added to PyTorch3D (see the original paper cited above).
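A hedged usage sketch (assuming a CUDA device and that pycuda.gl / pyopengl are available; import paths may differ between PyTorch3D versions):
```python
import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras,
    MeshRenderer,
    PointLights,
    RasterizationSettings,
    SplatterPhongShader,
)
from pytorch3d.renderer.opengl import MeshRasterizerOpenGL

device = torch.device("cuda")
cameras = FoVPerspectiveCameras(device=device)
lights = PointLights(device=device)
renderer = MeshRenderer(
    rasterizer=MeshRasterizerOpenGL(
        cameras=cameras, raster_settings=RasterizationSettings(image_size=512)
    ),
    shader=SplatterPhongShader(device=device, cameras=cameras, lights=lights),
)
# images = renderer(meshes)  # meshes: a pytorch3d.structures.Meshes batch on `device`
```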
Reviewed By: patricklabatut, jcjohnson
Differential Revision: D37698816
fbshipit-source-id: 54d120639d3cb001f096237807e54aced0acda25
Summary:
For 3D segmentation problems it's really useful to be able to train the models from multiple viewpoints using PyTorch3D as the renderer. Currently, due to hardcoded assumptions in a few spots, the mesh renderer only supports rendering RGB (3-dimensional) data. You can encode the classification information as 3-channel data, but if you have more than 3 classes you're out of luck.
This relaxes the assumptions to make rendering semantic classes work with `HardFlatShader` and `AmbientLights` with no diffuse/specular components. The other shaders/lights don't make any sense for classification since they mutate the texture values in some way.
This only requires changes in `Materials` and `AmbientLights`. The bulk of the code is the unit test.
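A minimal sketch of the idea (toy tensors, not the code from this PR): encode K classes as a K-channel per-vertex texture, which AmbientLights + HardFlatShader pass through without mutating the values.
```python
import torch
from pytorch3d.renderer import TexturesVertex

num_classes = 5
labels = torch.tensor([0, 3, 4])  # one class label per vertex
# One-hot encode the labels as a (1, V, K) per-vertex "texture"; the renderer
# then outputs K-channel images whose argmax recovers the class per pixel.
features = torch.nn.functional.one_hot(labels, num_classes).float()
textures = TexturesVertex(verts_features=features[None])
```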
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1248
Test Plan: Added a unit test that renders a 5-dimensional texture and compares dimensions 2-5 to a stored picture.
Reviewed By: bottler
Differential Revision: D37764610
Pulled By: d4l3k
fbshipit-source-id: 031895724d9318a6f6bab5b31055bb3f438176a5
Summary: Splatting shader. See code comments for details. Same API as SoftPhongShader.
Reviewed By: jcjohnson
Differential Revision: D36354301
fbshipit-source-id: 71ee37f7ff6bb9ce028ba42a65741424a427a92d
Summary:
Applies new import merging and sorting from µsort v1.0.
When merging imports, µsort will make a best-effort to move associated
comments to match merged elements, but there are known limitations due to
the dynamic nature of Python and developer tooling. These changes should
not produce any dangerous runtime changes, but may require touch-ups to
satisfy linters and other tooling.
Note that µsort uses case-insensitive, lexicographical sorting, which
results in a different ordering compared to isort. This provides a more
consistent sorting order, matching the case-insensitive order used when
sorting import statements by module name, and ensures that "frog", "FROG",
and "Frog" always sort next to each other.
For details on µsort's sorting and merging semantics, see the user guide:
https://usort.readthedocs.io/en/stable/guide.html#sorting
Reviewed By: bottler
Differential Revision: D35553814
fbshipit-source-id: be49bdb6a4c25264ff8d4db3a601f18736d17be1
Summary: As reported in https://github.com/facebookresearch/pytorch3d/pull/1100, _num_faces_per_mesh was being changed on the source mesh in join_batch. This affects both TexturesUV and TexturesAtlas.
Reviewed By: nikhilaravi
Differential Revision: D34643675
fbshipit-source-id: d67bdaca7278f18a76cfb15ba59d0ea85575bd36
Summary: Update all FB license strings to the new format.
Reviewed By: patricklabatut
Differential Revision: D33403538
fbshipit-source-id: 97a4596c5c888f3c54f44456dc07e718a387a02c
Summary:
All the renderers in PyTorch3D (pointclouds including pulsar, meshes, raysampling) use align_corners=False style. NDC space goes between the edges of the outer pixels. For a non square image with W>H, the vertical NDC space goes from -1 to 1 and the horizontal from -W/H to W/H.
However it was recently pointed out that functionality which deals with screen space inside the camera classes is inconsistent with this. It unintentionally uses align_corners=True. This fixes that.
This would change behaviour of the following:
- If you create a camera in screen coordinates, i.e. setting `in_ndc=False`, then anything you do with the camera which touches NDC space may be affected, including trying to use renderers. The `transform_points_screen` function will not be affected...
- If you call the function `transform_points_screen` on a camera defined in NDC space, results will be different. I have illustrated in the diff how to get the old results from the new results, but this probably isn't the right long-term solution.
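For reference, a small sketch of the NDC extents described above (a hypothetical helper, assuming the align_corners=False convention):
```python
def ndc_extents(image_height: int, image_width: int):
    # The shorter image side spans [-1, 1]; the longer side is scaled by the
    # aspect ratio, e.g. W > H gives a horizontal range of [-W/H, W/H].
    short = min(image_height, image_width)
    half_w = image_width / short
    half_h = image_height / short
    return (-half_w, half_w), (-half_h, half_h)

# ndc_extents(512, 1024) -> ((-2.0, 2.0), (-1.0, 1.0))
```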
Reviewed By: gkioxari
Differential Revision: D32536305
fbshipit-source-id: 377325a9137282971dcb7ca11a6cba3fc700c9ce
Summary: Fix issue #826. This is a correction to the joining of TexturesUV into a single scene.
Reviewed By: nikhilaravi
Differential Revision: D30767092
fbshipit-source-id: 03ba6a1d2f22e569d1b3641cd13ddbb8dcb87ec7
Summary: Fix to resolve GitHub issue #796 - the cameras were being passed in the renderer forward pass instead of at initialization. The rasterizer was correctly using the cameras passed in the `kwargs` for the projection, but the `cameras` are still part of the `kwargs` for the `get_screen_to_ndc_transform` and `get_ndc_to_screen_transform` functions, which was causing issues with duplicate arguments.
Reviewed By: bottler
Differential Revision: D30175679
fbshipit-source-id: 547e88d8439456e728fa2772722df5fa0fe4584d
Summary:
API fix for NDC/screen cameras and compatibility with PyTorch3D renderers.
With this new fix:
* Users can define cameras and `transform_points` under any coordinate system conventions. The transformation applies the camera K and RT to the input points, without regard to PyTorch3D conventions. This makes cameras completely independent of the PyTorch3D renderer.
* Cameras can be defined either in NDC space or in screen space. Of the existing cameras, the FoV cameras are in NDC space; Perspective/Orthographic cameras can be defined in NDC or screen space.
* The interface with PyTorch3D renderers happens through `transform_points_ndc` which transforms points to the NDC space and assumes that input points are provided according to PyTorch3D conventions.
* Similarly, `transform_points_screen` transforms points to screen space and again assumes that input points are under PyTorch3D conventions.
* For Orthographic/Perspective cameras, if they are defined in screen space, `get_ndc_camera_transform` allows points to be converted to NDC for use with the renderers.
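A hedged usage sketch of the resulting API (toy values; the exact keyword arguments, e.g. `image_size`, are assumptions and may differ by version):
```python
import torch
from pytorch3d.renderer import PerspectiveCameras

points = torch.rand(1, 8, 3) + 2.0  # world-space points in front of the camera
cameras = PerspectiveCameras(focal_length=1.0)

# Applies K and R/T with no assumptions about PyTorch3D conventions:
projected = cameras.transform_points(points)
# For the PyTorch3D renderers, which assume PyTorch3D conventions:
points_ndc = cameras.transform_points_ndc(points)
points_screen = cameras.transform_points_screen(points, image_size=((128, 128),))
```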
Reviewed By: nikhilaravi
Differential Revision: D26932657
fbshipit-source-id: 1a964e3e7caa54d10c792cf39c4d527ba2fb2e79
Summary: Fixed multiple issues with shape broadcasting in lighting, shading and blending and updated the tests.
Reviewed By: bottler
Differential Revision: D28997941
fbshipit-source-id: d3ef93f979344076b1d9098a86178b4da63844c8
Summary: Updated the alpha channel in the `hard_rgb_blend` function to return the mask of the pixels which have overlapping mesh faces.
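Roughly the idea, as a hedged sketch (hypothetical helper; not the exact code in `hard_rgb_blend`):
```python
import torch

def coverage_alpha(pix_to_face: torch.Tensor) -> torch.Tensor:
    # pix_to_face: (N, H, W, K) face indices from the rasterizer, where -1
    # marks pixels not covered by any face. Alpha is 1 wherever the nearest
    # face index is valid, i.e. at least one mesh face overlaps the pixel.
    return (pix_to_face[..., 0] >= 0).to(torch.float32)
```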
Reviewed By: bottler
Differential Revision: D29001604
fbshipit-source-id: 22a2173d769f2d3ad34892d68ceb628f073bca22
Summary:
Adds a specific light object to represent lighting which is 100% ambient everywhere. Sometimes lighting is irrelevant, for example when viewing a mesh which has lighting already baked in.
This is not primarily aimed at a performance win, but I think the test which has changed might be a bit faster.
Reviewed By: theschnitz
Differential Revision: D26405151
fbshipit-source-id: 82eae55de0bee918548a3eaf031b002cb95e726c
Summary:
If you join several meshes which have TexturesUV textures using join_meshes_as_scene, then we amalgamate all the texture images into a single one. This now checks whether some of the images are equal (i.e. the tensors are the same tensor, in the `is` sense; they have the same `id` in Python) and only uses one copy if they are.
I have an example of a massive scene made of several textured meshes, some of which share texture images, where this makes the difference between fitting the data on the GPU and not.
Reviewed By: theschnitz
Differential Revision: D25982364
fbshipit-source-id: a8228805f38475c796302e27328a340d9b56c8ef
Summary: Simplify finding the data directories in the tests.
Reviewed By: nikhilaravi
Differential Revision: D27634293
fbshipit-source-id: dc308a7c86c41e6fae56a2ab58187c9f0335b575
Summary: Make common functions for finding directories where test data is found, instead of lots of tests using their own `__file__` while trying to get ./tests/data and the tutorials data.
Reviewed By: nikhilaravi
Differential Revision: D27633701
fbshipit-source-id: 1467bb6018cea16eba3cab097d713116d51071e9
Summary:
- Updated the C++/CUDA mesh rasterization kernels to handle the clipped faces. In particular, this required careful handling of the distance calculation for faces which are cut into a quadrilateral by the image plane and then split into two sub-triangles, i.e. both sub-triangles can't both be counted among the top K faces.
- Updated `rasterize_meshes.py` to use the utils functions to clip the meshes and to convert the fragments back to being in terms of the unclipped mesh
- Added end to end tests
Reviewed By: jcjohnson
Differential Revision: D26169685
fbshipit-source-id: d64cd0d656109b965f44a35c301b7c81f451cfa0
Summary: Small update to the cameras and rasterizer to correctly infer the type of camera (perspective vs orthographic).
Reviewed By: jcjohnson
Differential Revision: D26267225
fbshipit-source-id: a58ed3bc2ab25553d2a4307c734204c1d41b5176
Summary: Fixes the index out-of-bounds errors for texture sampling from a texture atlas: when a barycentric coordinate is 1.0, the integer index into the (R, R) per-face texture map becomes R, while the maximum valid index is R-1.
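The fix amounts to a clamp; a hedged sketch with a hypothetical helper name:
```python
import torch

def bary_to_texel_index(w: torch.Tensor, R: int) -> torch.Tensor:
    # w: barycentric coordinate(s) in [0, 1]; R: per-face atlas resolution.
    # Without the clamp, w == 1.0 maps to index R, which is one past the last
    # valid texel (indices run from 0 to R - 1).
    return (w * R).long().clamp(max=R - 1)
```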
Reviewed By: gkioxari
Differential Revision: D25543803
fbshipit-source-id: 82d0935b981352b49c1d95d5a17f9cc88bad0a82
Summary: Users want to be able to obtain the depth from the renderer. The current workaround requires running the rasterizer an extra time. This change creates a new renderer class that also returns the fragments from the rasterizer.
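A minimal sketch of the pattern (hypothetical class name; the actual diff may differ):
```python
import torch.nn as nn

class MeshRendererWithDepth(nn.Module):
    """Like MeshRenderer, but also returns the rasterizer's Fragments so
    callers can read the depth buffer (fragments.zbuf) directly."""

    def __init__(self, rasterizer, shader):
        super().__init__()
        self.rasterizer = rasterizer
        self.shader = shader

    def forward(self, meshes_world, **kwargs):
        fragments = self.rasterizer(meshes_world, **kwargs)
        images = self.shader(fragments, meshes_world, **kwargs)
        # fragments.zbuf has shape (N, H, W, faces_per_pixel)
        return images, fragments
```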
Reviewed By: nikhilaravi
Differential Revision: D24432381
fbshipit-source-id: 6552e8a6bfee646791afb34bdb7452fbc4094aed
Summary:
Small fix and updated tests for the multi-GPU rendering case.
This resolves the issue seen in: https://github.com/facebookresearch/pytorch3d/issues/401
Reviewed By: gkioxari
Differential Revision: D24314681
fbshipit-source-id: 84c5a5359844c77518b48044001daa9a86f3c43a
Summary:
Support for moving all the tensors of the renderer to another device by calling `renderer.to(new_device)`
Currently the `MeshRenderer`, `MeshRasterizer` and `SoftPhongShader` (and other shaders) are all of type `nn.Module`, which already supports easily moving the tensors of submodules (defined as class attributes) to a different device. However, the class attributes of the rasterizer and shader (e.g. cameras, lights, materials) are of type `TensorProperties`, not `nn.Module`, so we need to explicitly create a `to` method to move these tensors to the device. Note that the `TensorProperties` class already has a `to` method, so we only need to call `cameras.to(device)` and don't need to worry about the internal tensors.
The other option is, of course, making these other classes (cameras, lights, etc.) also of type `nn.Module`.
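A stripped-down sketch of the pattern for the rasterizer (only the relevant pieces, not the full class):
```python
import torch.nn as nn

class MeshRasterizer(nn.Module):
    def __init__(self, cameras=None, raster_settings=None):
        super().__init__()
        self.cameras = cameras
        self.raster_settings = raster_settings

    def to(self, device):
        # cameras is a TensorProperties object, not an nn.Module, so
        # nn.Module.to() would not move it; delegate to its own `to`.
        if self.cameras is not None:
            self.cameras = self.cameras.to(device)
        return self
```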
Reviewed By: gkioxari
Differential Revision: D23885107
fbshipit-source-id: d71565c442181f739de4d797076ed5d00fb67f8e
Summary:
This fixes two small issues with blending.py:softmax_rgb_blend():
1) zfar and znear attributes are propagated from the camera settings instead of just using default settings of znear=1.0 and zfar=100.0
2) A check is added to prevent arithmetic overflow in softmax_rgb_blend()
This is a fix in response to https://github.com/facebookresearch/pytorch3d/issues/334
where meshes rendered using a SoftPhongShader with faces_per_pixel=1 appear black. This only occurs when the scale of the mesh is large (vertex values > 100, where 100 is the default value of zfar). This fix allows the caller to increase the value of cameras.zfar to match the scale of their mesh.
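A hedged sketch of the propagation (a hypothetical wrapper illustrating the pattern; the shaders do this internally):
```python
from pytorch3d.renderer.blending import softmax_rgb_blend

def blend_with_camera_clip_planes(colors, fragments, blend_params, cameras, **kwargs):
    # Take znear/zfar from the camera (or explicit kwargs) rather than relying
    # on the defaults of znear=1.0 and zfar=100.0, which make large meshes
    # (vertex depths > 100) render black.
    znear = kwargs.get("znear", getattr(cameras, "znear", 1.0))
    zfar = kwargs.get("zfar", getattr(cameras, "zfar", 100.0))
    return softmax_rgb_blend(colors, fragments, blend_params, znear=znear, zfar=zfar)
```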
Reviewed By: nikhilaravi
Differential Revision: D23517541
fbshipit-source-id: ab8631ce9e5f2149f140b67b13eff857771b8807
Summary:
Add a join_scene method to all the textures to allow the join_mesh function to include textures. Rename the join_mesh function to join_meshes_as_scene.
For TexturesAtlas, we now interpolate if the user attempts to have the resolution vary across the batch. This doesn't look great if the resolution is already very low.
For TexturesUV, a rectangle packing function is required; the one added here does something simple.
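A small self-contained usage sketch of the renamed entry point (toy meshes with TexturesVertex; TexturesUV/TexturesAtlas work the same way):
```python
import torch
from pytorch3d.renderer import TexturesVertex
from pytorch3d.structures import Meshes, join_meshes_as_scene

def textured_triangle(offset: float) -> Meshes:
    verts = torch.tensor([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]) + offset
    faces = torch.tensor([[0, 1, 2]])
    textures = TexturesVertex(verts_features=torch.rand(1, 3, 3))
    return Meshes(verts=[verts], faces=[faces], textures=textures)

# Two textured meshes merged into a single-mesh Meshes object; their textures
# are combined via the new join_scene method.
scene = join_meshes_as_scene([textured_triangle(0.0), textured_triangle(2.0)])
```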
Reviewed By: gkioxari
Differential Revision: D23188773
fbshipit-source-id: c013db061a04076e13e90ccc168a7913e933a9c5
Summary:
faces_uvs_packed and verts_uvs_packed were only used in one place and the definition of the former was ambiguous. This meant that the wrong coordinates could be used for meshes other than the first in the batch. I have therefore removed both functions and built their common result inline. Added a test that a simple batch of two meshes is rendered consistently with the rendering of each alone. This test would have failed before.
I hope this fixes https://github.com/facebookresearch/pytorch3d/issues/283.
Some other small improvements to the textures code.
Reviewed By: nikhilaravi
Differential Revision: D23161936
fbshipit-source-id: f99b560a46f6b30262e07028b049812bc04350a7
Summary:
Refactor cameras
* CamerasBase was enhanced with `transform_points_screen` that transforms projected points from NDC to screen space
* OpenGLPerspective, OpenGLOrthographic -> FoVPerspective, FoVOrthographic
* SfMPerspective, SfMOrthographic -> Perspective, Orthographic
* PerspectiveCamera can optionally be constructed with screen-space parameters
* Note on Cameras and coordinate systems was added
Reviewed By: nikhilaravi
Differential Revision: D23168525
fbshipit-source-id: dd138e2b2cc7e0e0d9f34c45b8251c01266a2063
Summary:
A fairly big refactor of the texturing API with some breaking changes to how textures are defined.
Main changes:
- There are now 3 types of texture classes: `TexturesUV`, `TexturesAtlas` and `TexturesVertex`. Each class:
- has a `sample_textures` function which accepts the `fragments` from rasterization and returns `texels`. This means that the shaders will not need to know the type of the mesh texture which will resolve several issues people were reporting on GitHub.
- has a `join_batch` method for joining multiple textures of the same type into a batch
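A hedged sketch of the three classes and the input shapes they expect (toy sizes; constructor keywords per the current API):
```python
import torch
from pytorch3d.renderer import TexturesAtlas, TexturesUV, TexturesVertex

# N meshes, V verts, F faces, R atlas resolution, H x W texture image, C channels.
tex_vertex = TexturesVertex(verts_features=torch.rand(1, 3, 3))  # (N, V, C)
tex_atlas = TexturesAtlas(atlas=torch.rand(1, 1, 4, 4, 3))       # (N, F, R, R, C)
tex_uv = TexturesUV(
    maps=torch.rand(1, 8, 8, 3),             # (N, H, W, C)
    faces_uvs=torch.tensor([[[0, 1, 2]]]),   # (N, F, 3)
    verts_uvs=torch.rand(1, 3, 2),           # (N, V, 2)
)
# Each exposes sample_textures(fragments) -> texels and join_batch(...).
```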
Reviewed By: gkioxari
Differential Revision: D21067427
fbshipit-source-id: 4b346500a60181e72fdd1b0dd89b5505c7a33926
Summary:
Added support for barycentric clipping in the C++/CUDA rasterization kernels which can be switched on/off via a rasterization setting.
Added tests and a benchmark to compare with the current implementation in PyTorch - for some cases of large image size/faces per pixel the CUDA version is 10x faster.
Reviewed By: gkioxari
Differential Revision: D21705503
fbshipit-source-id: e835c0f927f1e5088ca89020aef5ff27ac3a8769
Summary: Adding a function in pytorch3d.structures.meshes to join multiple meshes into a Meshes object representing a single mesh. The function currently ignores all textures.
Reviewed By: nikhilaravi
Differential Revision: D21876908
fbshipit-source-id: 448602857e9d3d3f774d18bb4e93076f78329823
Summary: Adds support to hard_rgb_blend and the hard blending shaders in shader.py (HardPhongShader, HardGouraudShader, and HardFlatShader) for changing the background color on which objects are rendered.
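A hedged usage sketch (assuming, as in current PyTorch3D, that the color is passed via BlendParams):
```python
from pytorch3d.renderer import BlendParams, HardPhongShader

# Render onto a blue background instead of the default.
blend_params = BlendParams(background_color=(0.0, 0.0, 1.0))
shader = HardPhongShader(device="cpu", blend_params=blend_params)
```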
Reviewed By: nikhilaravi
Differential Revision: D21746062
fbshipit-source-id: 08001200f4339d6a69c52405c6b8f4cac9f3f56e
Summary:
Fix a bug which resulted in rendering artifacts if the image size was not a multiple of 16.
Fix: Revert the coarse rasterization to the original implementation and only update the fine rasterization to reverse the ordering of the Y and X axes. This is much simpler than the previous approach!
Additional changes:
- updated the mesh rendering end-to-end tests to check outputs from both naive and coarse-to-fine rasterization.
- added pointcloud rendering end-to-end tests
Reviewed By: gkioxari
Differential Revision: D21102725
fbshipit-source-id: 2e7e1b013dd6dd12b3a00b79eb8167deddb2e89a