Summary:
Converts the directory specified to use the Ruff formatter in pyfmt
ruff_dog
If this diff causes merge conflicts when rebasing, please run
`hg status -n -0 --change . -I '**/*.{py,pyi}' | xargs -0 arc pyfmt`
on your diff, and amend any changes before rebasing onto latest.
That should help reduce or eliminate any merge conflicts.
allow-large-files
Reviewed By: bottler
Differential Revision: D66472063
fbshipit-source-id: 35841cb397e4f8e066e2159550d2f56b403b1bef
Summary:
Introduce methods to approximate the radii of conical frustums along rays as described in [MipNerf](https://arxiv.org/abs/2103.13415):
- Two new attributes are added to ImplicitronRayBundle: bins and radii. Bins has size n_pts_per_ray + 1, which lets us easily manipulate the n_pts_per_ray intervals; for example, we need the interval boundary coordinates to compute \(t_{\mu}\) and \(t_{\delta}\) for the radii. Radii store the radii of the conical frustums.
- Add 3 new methods to compute the radii:
- approximate_conical_frustum_as_gaussians: computes the mean along the ray direction, the variance of the conical frustum with respect to t, and the variance of the conical frustum with respect to its radius. This implementation follows the stable computation defined in the paper.
- compute_3d_diagonal_covariance_gaussian: Will leverage the two previously computed variances to find the
diagonal covariance of the Gaussian.
- conical_frustum_to_gaussian: combines everything to compute the means and the diagonal covariances of the Gaussians along the ray (see the sketch after this list).
- In AbstractMaskRaySampler, introduces the attribute `cast_ray_bundle_as_cone`. If False, the previous behaviour of the RaySampler is unchanged. If True, the samplers will sample `n_pts_per_ray + 1` points instead of `n_pts_per_ray`; these points are then used to set the bins attribute of ImplicitronRayBundle. Support for HeterogeneousRayBundle has not been added since the current code does not allow it; a safeguard has been added to avoid a silent bug in the future.
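For illustration, here is a standalone sketch of that stable computation following the formulas in the MipNerf paper; the function and argument names and the assumed tensor shapes are simplified and do not necessarily match the Implicitron API:
```
import torch

def conical_frustum_to_gaussian_sketch(bins, origins, directions, radii):
    # bins: (..., n_pts_per_ray + 1) interval boundaries along each ray
    # origins, directions: (..., 3); radii: (..., 1) base radius of each cone
    t0, t1 = bins[..., :-1], bins[..., 1:]
    t_mu = (t0 + t1) / 2        # interval midpoint
    t_delta = (t1 - t0) / 2     # interval half-width
    denom = 3 * t_mu ** 2 + t_delta ** 2
    # Stable mean and variances of the conical frustum (MipNerf, Eq. 7)
    t_mean = t_mu + (2 * t_mu * t_delta ** 2) / denom
    t_var = t_delta ** 2 / 3 - (4 / 15) * (
        t_delta ** 4 * (12 * t_mu ** 2 - t_delta ** 2) / denom ** 2
    )
    r_var = radii ** 2 * (
        t_mu ** 2 / 4 + (5 / 12) * t_delta ** 2 - (4 / 15) * t_delta ** 4 / denom
    )
    # Lift to 3D: Gaussian means along the ray and diagonal covariances (Eq. 16)
    means = origins[..., None, :] + t_mean[..., None] * directions[..., None, :]
    d_sq = directions ** 2
    d_norm_sq = d_sq.sum(dim=-1, keepdim=True).clamp(min=1e-10)
    diag_cov = t_var[..., None] * d_sq[..., None, :] + r_var[..., None] * (
        1 - d_sq[..., None, :] / d_norm_sq[..., None, :]
    )
    return means, diag_cov
```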
Reviewed By: shapovalov
Differential Revision: D45269190
fbshipit-source-id: bf22fad12d71d55392f054e3f680013aa0d59b78
Summary: Use IndexError so that a camera object is iterable (raising IndexError from `__getitem__` lets Python's iteration protocol terminate).
Reviewed By: shapovalov
Differential Revision: D42312021
fbshipit-source-id: 67c417d5f1398e8b30a6944468eda057b4ceb444
Summary:
User reported that cloned cameras fail to save. The error with latest PyTorch is
```
pickle.PicklingError: Can't pickle ~T_destination: attribute lookup T_destination on torch.nn.modules.module failed
```
This fixes it.
Reviewed By: btgraham
Differential Revision: D39692258
fbshipit-source-id: 75bbf3b8dfa0023dc28bf7d4cc253ca96e46a64d
Summary:
Amend FisheyeCamera by adding tests for all combinations of params and for different batch sizes.
Reviewed By: kjchalup
Differential Revision: D39176747
fbshipit-source-id: 830d30da24beeb2f0df52db0b17a4303ed53b59c
Summary: Address comments to add benchmarks for cameras and the new fisheye cameras. The dependency functions in test_cameras have been updated in Diff 1. The following two snapshots show the benchmarking results.
Reviewed By: kjchalup
Differential Revision: D38991914
fbshipit-source-id: 51fe9bb7237543e4ee112c9f5068a4cf12a9d482
Summary:
1. A Fisheye camera model that generalizes the pinhole camera model by accounting for distortions (radial, tangential and thin-prism); a generic distortion sketch follows after this list.
2. Added tests against perspective cameras when distortions are off and Aria data points when distortions are on.
3. Address comments to test unhandled shapes between points and transforms. Added tests for __FIELDS, shape broadcasting, CUDA, etc.
4. Address earlier comments on code efficiency (e.g. adopted torch.norm and torch.solve for matrix inversion; removed in-place operations and an unnecessary clone; used expand in place of repeat).
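For reference, a minimal sketch of a generic radial/tangential/thin-prism distortion applied to normalized image coordinates; this is an OpenCV-style illustration only, not the exact FishEyeCameras parameterization:
```
import torch

def distort_points_sketch(xy, k, p, s):
    # xy: (..., 2) normalized image coordinates.
    # k: 3 radial, p: 2 tangential, s: 4 thin-prism coefficients.
    x, y = xy[..., 0], xy[..., 1]
    r2 = x * x + y * y
    radial = 1 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
    x_d = x * radial + 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x) + s[0] * r2 + s[1] * r2 ** 2
    y_d = y * radial + p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y + s[2] * r2 + s[3] * r2 ** 2
    return torch.stack([x_d, y_d], dim=-1)
```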
Reviewed By: jcjohnson
Differential Revision: D38407094
fbshipit-source-id: a3ab48c85c496ac87af692d5d461bb3fc2a2db13
Summary:
Applies new import merging and sorting from µsort v1.0.
When merging imports, µsort will make a best-effort to move associated
comments to match merged elements, but there are known limitations due to
the dynamic nature of Python and developer tooling. These changes should
not produce any dangerous runtime changes, but may require touch-ups to
satisfy linters and other tooling.
Note that µsort uses case-insensitive, lexicographical sorting, which
results in a different ordering compared to isort. This provides a more
consistent sorting order, matching the case-insensitive order used when
sorting import statements by module name, and ensures that "frog", "FROG",
and "Frog" always sort next to each other.
For details on µsort's sorting and merging semantics, see the user guide:
https://usort.readthedocs.io/en/stable/guide.html#sorting
Reviewed By: bottler
Differential Revision: D35553814
fbshipit-source-id: be49bdb6a4c25264ff8d4db3a601f18736d17be1
Summary:
Function to join a list of camera objects into a single batched camera object.
FB: In the next diff I will remove the `concatenate_cameras` function in implicitron and update the callsites.
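A hypothetical usage sketch, assuming the function is exposed as `join_cameras_as_batch` (the name and import path here are assumptions):
```
import torch
from pytorch3d.renderer import PerspectiveCameras
from pytorch3d.renderer.camera_utils import join_cameras_as_batch

cam1 = PerspectiveCameras(focal_length=torch.tensor([1.0]))
cam2 = PerspectiveCameras(focal_length=torch.tensor([2.0]))
# One PerspectiveCameras object with batch size 2
batched = join_cameras_as_batch([cam1, cam2])
```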
Reviewed By: nikhilaravi
Differential Revision: D33198209
fbshipit-source-id: 0c9f5f5df498a0def9dba756c984e6a946618158
Summary:
Added a custom `__getitem__` method to `CamerasBase` which returns an instance of the appropriate camera instead of the `TensorAccessor` class.
Long term, we should deprecate `TensorAccessor` and the `__getitem__` method on `TensorProperties`.
FB: In the next diff I will update the uses of `select_cameras` in implicitron.
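A small usage sketch of the new indexing behaviour (illustrative):
```
import torch
from pytorch3d.renderer import PerspectiveCameras

cameras = PerspectiveCameras(focal_length=torch.tensor([1.0, 2.0, 3.0]))  # batch of 3
single = cameras[0]        # a PerspectiveCameras of batch size 1, not a TensorAccessor
subset = cameras[[0, 2]]   # a PerspectiveCameras of batch size 2
```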
Reviewed By: bottler
Differential Revision: D33185885
fbshipit-source-id: c31995d0eb126981e91ba61a6151d5404b263f67
Summary: Fix some comments to match the recent change to transform_points_screen.
Reviewed By: patricklabatut
Differential Revision: D33243697
fbshipit-source-id: dc8d182667a9413bca2c2e3657f97b2f7a47c795
Summary:
All the renderers in PyTorch3D (pointclouds including pulsar, meshes, raysampling) use align_corners=False style. NDC space goes between the edges of the outer pixels. For a non square image with W>H, the vertical NDC space goes from -1 to 1 and the horizontal from -W/H to W/H.
However it was recently pointed out that functionality which deals with screen space inside the camera classes is inconsistent with this. It unintentionally uses align_corners=True. This fixes that.
This would change behaviour of the following:
- If you create a camera in screen coordinates, i.e. setting in_ndc=False, then anything you do with the camera which touches NDC space may be affected, including trying to use renderers. The transform_points_screen function will not be affected...
- If you call `transform_points_screen` on a camera defined in NDC space, results will be different. I have illustrated in the diff how to get the old results from the new results, but this probably isn't the right long-term solution.
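For reference, a small sketch of the non-square NDC extents under the align_corners=False convention described above (an illustrative helper, not part of the API):
```
def ndc_half_extents(image_width, image_height):
    # NDC spans the outer edges of the boundary pixels: the shorter image side maps
    # to [-1, 1] and the longer side to [-s, s] with s = long_side / short_side.
    s = max(image_width, image_height) / min(image_width, image_height)
    x_half = s if image_width >= image_height else 1.0
    y_half = s if image_height > image_width else 1.0
    return x_half, y_half

# e.g. a 128 x 64 image (W > H): x in [-2, 2], y in [-1, 1]
```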
Reviewed By: gkioxari
Differential Revision: D32536305
fbshipit-source-id: 377325a9137282971dcb7ca11a6cba3fc700c9ce
Summary:
API fix for NDC/screen cameras and compatibility with PyTorch3D renderers.
With this new fix:
* Users can define cameras and call `transform_points` under any coordinate system conventions. The transformation applies the camera K and RT to the input points, without regard to PyTorch3D conventions, so cameras are completely independent of the PyTorch3D renderer (see the usage sketch after this list).
* Cameras can be defined either in NDC space or screen space. For existing ones, FoV cameras are in NDC space. Perspective/Orthographic can be defined in NDC or screen space.
* The interface with PyTorch3D renderers happens through `transform_points_ndc` which transforms points to the NDC space and assumes that input points are provided according to PyTorch3D conventions.
* Similarly, `transform_points_screen` transforms points to screen space and again assumes that input points are under PyTorch3D conventions.
* For Orthographic/Perspective cameras, if they are defined in screen space, the `get_ndc_camera_transform` allows points to be converted to NDC for use for the renderers.
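A usage sketch of this workflow with a screen-space camera; the parameter values are arbitrary and the constructor arguments assume the current API:
```
import torch
from pytorch3d.renderer import PerspectiveCameras

# A perspective camera defined directly in screen (pixel) space
cameras = PerspectiveCameras(
    focal_length=torch.tensor([[500.0, 500.0]]),
    principal_point=torch.tensor([[128.0, 128.0]]),
    image_size=torch.tensor([[256, 256]]),
    in_ndc=False,
)
points = torch.rand(1, 8, 3)
points_ndc = cameras.transform_points_ndc(points)        # for the PyTorch3D renderers
points_screen = cameras.transform_points_screen(points)  # pixel coordinates
```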
Reviewed By: nikhilaravi
Differential Revision: D26932657
fbshipit-source-id: 1a964e3e7caa54d10c792cf39c4d527ba2fb2e79
Summary: Deprecate the `so3_exponential_map()` function in favor of its alias `so3_exp_map()`: this aligns with the naming of `so3_log_map()` and the recently introduced `se3_exp_map()` / `se3_log_map()` pair.
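A minimal before/after example of the rename:
```
import torch
from pytorch3d.transforms import so3_exp_map

log_rot = torch.randn(4, 3)
R = so3_exp_map(log_rot)  # previously: so3_exponential_map(log_rot)
```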
Reviewed By: bottler
Differential Revision: D29329966
fbshipit-source-id: b6f60b9e86b2995f70b1fbeb16f9feea05c55de9
Summary: Small update to the cameras and rasterizer to correctly infer the type of camera (perspective vs orthographic).
Reviewed By: jcjohnson
Differential Revision: D26267225
fbshipit-source-id: a58ed3bc2ab25553d2a4307c734204c1d41b5176
Summary: Allowing usort, isort and black to coexist without fighting means we can't have imports commented as deprecated from the same module as other imports.
Reviewed By: nikhilaravi
Differential Revision: D25372970
fbshipit-source-id: 637f5a0025c0df9fbec47cba73ce5387f4f8b467
Summary: Currently, to initialize the Cameras class we require the principal point, focal length and other parameters to be specified, from which we calculate the intrinsic matrix. In some cases the matrix might be directly available, e.g. from a dataset and the associated metadata for an image.
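A small sketch of the intended usage, assuming the intrinsics are passed via a `K` argument (a 4x4 projection matrix in this illustration):
```
import torch
from pytorch3d.renderer import PerspectiveCameras

# Intrinsic matrix taken directly from dataset metadata, batch of 1
K = torch.eye(4).unsqueeze(0)
cameras = PerspectiveCameras(K=K)  # instead of focal_length / principal_point
```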
Reviewed By: nikhilaravi
Differential Revision: D24489509
fbshipit-source-id: 1b411f19c5f6c8074bcfbf613f3339d5e242c119
Summary: When the camera is vertically oriented, calculating the look_at x-axis (also known as the "right" vector) does not succeed, resulting in the x-axis being placed at the origin. Adds a check to correctly calculate the x-axis if this case occurs.
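A minimal sketch of the degenerate case and one possible guard (illustrative only, not the exact fix that landed):
```
import torch
import torch.nn.functional as F

def right_vector(z_axis, up):
    # x-axis ("right") = normalize(up x z); when the camera looks straight along `up`
    # the cross product is ~0, so substitute a fixed fallback axis instead of returning 0.
    x_axis = torch.cross(up, z_axis, dim=-1)
    degenerate = x_axis.norm(dim=-1, keepdim=True) < 1e-5
    fallback = torch.tensor([1.0, 0.0, 0.0]).expand_as(x_axis)
    x_axis = torch.where(degenerate, fallback, x_axis)
    return F.normalize(x_axis, dim=-1)
```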
Reviewed By: nikhilaravi, sbranson
Differential Revision: D23511859
fbshipit-source-id: ee5145cdbecdbe2f7c7d288588bd0899480cb327
Summary:
The look_at_view_transform did not give the correct results when the object location `at` was not (0, 0, 0).
The problem was in computing the cameras' location `C` in world coordinates: it only took into account the camera position derived from the spherical angles and ignored the object location in the world coordinate system. I modified the C tensor to take into account the object's location, which is not necessarily at the origin.
I ran the unit tests and all passed except 4, which failed with the same error message: `RuntimeError: CUDA error: invalid device ordinal`. However, the same failures happen before this patch, so I believe they are unrelated.
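A sketch of the corrected camera-position computation; the spherical-angle convention used here is an assumption for illustration:
```
import math
import torch

def camera_position_sketch(dist, elev, azim, at):
    # Camera position from spherical angles (degrees), offset by the look-at
    # point `at` so the camera orbits `at` rather than the world origin.
    elev, azim = math.radians(elev), math.radians(azim)
    x = dist * math.sin(azim) * math.cos(elev)
    y = dist * math.sin(elev)
    z = dist * math.cos(azim) * math.cos(elev)
    return torch.tensor([x, y, z]) + at
```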
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/230
Reviewed By: gkioxari
Differential Revision: D23278126
Pulled By: nikhilaravi
fbshipit-source-id: c06e891bc46de8222325ee7b37aa43cde44648e8
Summary:
Refactor cameras
* CamerasBase was enhanced with `transform_points_screen` that transforms projected points from NDC to screen space
* OpenGLPerspective, OpenGLOrthographic -> FoVPerspective, FoVOrthographic
* SfMPerspective, SfMOrthographic -> Perspective, Orthographic
* PerspectiveCamera can optionally be constructed with screen space parameters
* Note on Cameras and coordinate systems was added
Reviewed By: nikhilaravi
Differential Revision: D23168525
fbshipit-source-id: dd138e2b2cc7e0e0d9f34c45b8251c01266a2063
Summary:
Ran the linter.
TODO: need to update the linter as per D21353065.
Reviewed By: bottler
Differential Revision: D21362270
fbshipit-source-id: ad0e781de0a29f565ad25c43bc94a19b1828c020
Summary: Made a CameraBase class. Added `unproject_points` method for each camera class.
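A minimal usage sketch of `unproject_points` (argument values are arbitrary):
```
import torch
from pytorch3d.renderer import PerspectiveCameras

cameras = PerspectiveCameras()
# xy_depth: projected xy coordinates of each point plus its depth
xy_depth = torch.tensor([[[0.1, 0.2, 5.0]]])
world_points = cameras.unproject_points(xy_depth, world_coordinates=True)
```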
Reviewed By: nikhilaravi
Differential Revision: D20373602
fbshipit-source-id: 7e3da5ae420091b5fcab400a9884ef29ad7a7343
Summary: use assertClose in some tests, which enforces shape equality. Fixes some small problems, including graph_conv on an empty graph.
Reviewed By: nikhilaravi
Differential Revision: D20556912
fbshipit-source-id: 60a61eafe3c03ce0f6c9c1a842685708fb10ac5b
Summary:
Create extrinsic parameters from eye point.
Create the rotation and translation from an eye point, look-at point and up vector.
see:
https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/gluLookAt.xml
It is arguably easier to initialise a camera position as a point in the world rather than an angle.
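A usage sketch of initialising a camera from an eye point (values are arbitrary):
```
import torch
from pytorch3d.renderer import look_at_view_transform

eye = torch.tensor([[0.0, 1.0, 5.0]])  # camera position in world coordinates
at = torch.tensor([[0.0, 0.0, 0.0]])   # look-at point
up = torch.tensor([[0.0, 1.0, 0.0]])
R, T = look_at_view_transform(eye=eye, at=at, up=up)
```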
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/65
Reviewed By: bottler
Differential Revision: D20419652
Pulled By: nikhilaravi
fbshipit-source-id: 9caa1330860bb8bde1fb5c3864ed4cde836a5d19
Summary: The shebang line `#!<path to interpreter>` is only required for Python scripts, so remove it from source files that only contain class or function definitions. Additionally, explicitly mark the actual Python scripts in the codebase as executable.
Reviewed By: nikhilaravi
Differential Revision: D20095778
fbshipit-source-id: d312599fba485e978a243292f88a180d71e1b55a
Summary:
## Updates
- Defined the world and camera coordinates according to this figure. The world coordinates are defined as having +Y up, +X left and +Z in.
{F230888499}
- Removed all flipping from blending functions.
- Updated the rasterizer to return images with +Y up and +X left.
- Updated all the mesh rasterizer tests
- The expected values are now defined in terms of the default +Y up, +X left
- Added tests where the triangles in the meshes are non-symmetrical so that it is clear which directions +X and +Y are
## Questions:
- Should we have **scene settings** instead of raster settings?
- To be more correct we should be [z clipping in the rasterizer based on the far/near clipping planes](https://github.com/ShichenLiu/SoftRas/blob/master/soft_renderer/cuda/soft_rasterize_cuda_kernel.cu#L400) - these values are also required in the blending functions so should we make these scene level parameters and have a scene settings tuple which is available to the rasterizer and shader?
Reviewed By: gkioxari
Differential Revision: D20208604
fbshipit-source-id: 55787301b1bffa0afa9618f0a0886cc681da51f3