Summary: When `sample_farthest_points` is used with `lengths` on the GPU, it throws an error because of a device mismatch between `lengths` and `torch.rand(lengths.size())`.
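A minimal sketch of the fix (hypothetical tensor values; the point is simply to create the random tensor on the same device as `lengths`):
```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
lengths = torch.tensor([100, 80, 60], device=device)  # hypothetical per-cloud lengths
# Creating the random values on lengths.device avoids the mismatch.
rand_vals = torch.rand(lengths.size(), device=lengths.device)
```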
Reviewed By: bottler
Differential Revision: D82378997
fbshipit-source-id: 8e929256177d543d1dd1249e8488f70e03e4101f
Summary: Some random seed changes. Skip multi-GPU tests when there's only one GPU. This is a better fix for what AI is doing in D80600882.
Reviewed By: MichaelRamamonjisoa
Differential Revision: D80625966
fbshipit-source-id: ac3952e7144125fd3a05ad6e4e6e5976ae10a8ef
Summary:
Optimizing sample_farthest_points by reducing CPU/GPU sync:
1. Replace the iterative randint calls for starting indices with a single function call when the length is constant.
2. Avoid a sync when fetching the maximum number of sample points if we sample the same amount everywhere.
3. Initialize a single tensor for samples and indices.
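A rough sketch of point 1 (assumed variable names, not the library code): a single batched randint call replaces a per-cloud loop, each iteration of which launches its own call.
```python
import torch

N, L = 32, 1000  # hypothetical batch size and constant cloud length
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Before (sketch): one randint call per cloud.
start_idxs_loop = torch.stack(
    [torch.randint(0, L, (1,), device=device) for _ in range(N)]
).squeeze(1)

# After (sketch): one call for the whole batch when every cloud has the same length.
start_idxs = torch.randint(0, L, (N,), device=device)
```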
Comparison:
https://fburl.com/mlhub/7wk0xi98
Before:
{F1980383703}
After:
{F1980383707}
The histograms match pretty closely:
{F1980464338}
Reviewed By: bottler
Differential Revision: D78731869
fbshipit-source-id: 060528ae7a1e0fbbd005d129c151eaf9405841de
Summary:
Fixes hard crashes (bus errors) when using the MPS device (Apple Silicon) by adding CPU checks throughout files in the csrc subdirectories to verify that tensors are on a CPU device.
Note that this is the fourth and final part of a larger change spanning multiple files & directories.
Reviewed By: bottler
Differential Revision: D77698176
fbshipit-source-id: 5bc9e3c5cea61afd486aed7396f390d92775ec6d
Summary:
Adds CHECK_CPU macros that check whether a tensor is on the CPU device throughout the csrc directories and subdirectories up to `pulsar`.
Note that this is the third part of a larger change, and to keep diffs better organized, subsequent diffs will update the remaining directories.
Reviewed By: bottler
Differential Revision: D77696998
fbshipit-source-id: 470ca65b23d9965483b5bdd30c712da8e1131787
Summary:
Adds CHECK_CPU macros that check whether a tensor is on the CPU device throughout the csrc directories up to `marching_cubes`. Directories updated include those in `gather_scatter`, `interp_face_attrs`, `iou_box3d`, `knn`, and `marching_cubes`.
Note that this is the second part of a larger change, and to keep diffs better organized, subsequent diffs will update the remaining directories.
Reviewed By: bottler
Differential Revision: D77558550
fbshipit-source-id: 762a0fe88548dc8d0901b198a11c40d0c36e173f
Summary:
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1986
Adds device checks to prevent crashes on unsupported devices in PyTorch3D. Updates the `pytorch3d_cutils.h` file to include a new macro, CHECK_CPU, that checks whether a tensor is on the CPU device. This macro is then used in the directories from `ball_query` to `face_area_normals` to ensure that tensors are not on unsupported devices like MPS.
Note that this is the first part of a larger change, and to keep diffs better organized, subsequent diffs will update the remaining directories.
Reviewed By: bottler
Differential Revision: D77473296
fbshipit-source-id: 13dc84620dee667bddebad1dade2d2cb5a59c737
Summary:
The current implementation of `matrix_to_quaternion` and `_sqrt_positive_part` uses boolean indexing, which can slow down performance and cause incompatibility with `torch.compile` unless `torch._dynamo.config.capture_dynamic_output_shape_ops` is set to `True`.
To enhance performance and compatibility, I recommend using `torch.gather` to select the best-conditioned quaternions and `F.relu` instead of `x > 0` (bottler's suggestion).
For a detailed comparison of the implementation differences when using `torch.compile`, please refer to my Bento notebook
N7438339.
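A minimal sketch of the `F.relu` part of the suggestion (not the exact library code): computing sqrt(max(x, 0)) without boolean indexing keeps output shapes static, which is friendlier to `torch.compile`.
```python
import torch
import torch.nn.functional as F

def sqrt_positive_part_sketch(x: torch.Tensor) -> torch.Tensor:
    # sqrt of the positive part of x, with no data-dependent indexing.
    return torch.sqrt(F.relu(x))
```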
Reviewed By: bottler
Differential Revision: D77176230
fbshipit-source-id: 9a6a2e0015b5865056297d5f45badc3c425b93ce
Summary: Resolved self-assignment warnings in the `renderer.forward.device.h` file by removing redundant assignments of the `stream` variable to itself in `cub::DeviceSelect::Flagged` function calls. This change eliminates the compiler warnings and keeps the code cleaner.
Reviewed By: bottler
Differential Revision: D76554140
fbshipit-source-id: 28eae0186246f51a8ac8002644f184349aa49560
Summary:
I could not access https://github.com/NVlabs/cub/issues/172 to understand whether IntWrapper was still necessary but the comment is from 5 years ago and causes problems for the ROCm build.
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1964
Reviewed By: MichaelRamamonjisoa
Differential Revision: D71937895
Pulled By: bottler
fbshipit-source-id: 5e0351e1bd8599b670436cd3464796eca33156f6
Summary:
CUDA kernel variables matching the type `(thread|block|grid).(Idx|Dim).(x|y|z)` [have the data type `uint`](https://docs.nvidia.com/cuda/cuda-c-programming-guide/#built-in-variables).
Many programmers mistakenly use implicit casts to turn these data types into `int`. In fact, the [CUDA Programming Guide](https://docs.nvidia.com/cuda/cuda-c-programming-guide/) itself is inconsistent and incorrect in its use of data types in programming examples.
The result of these implicit casts is that our kernels may give unexpected results when exposed to large datasets, i.e., those exceeding roughly 2B items.
While we now have linters in place to prevent simple mistakes (D71236150), our codebase has many problematic instances. This diff fixes some of them.
Reviewed By: dtolnay
Differential Revision: D71355356
fbshipit-source-id: cea44891416d9efd2f466d6c45df4e36008fa036
Summary:
A continuation of https://github.com/facebookresearch/pytorch3d/issues/1948 -- this commit fixes a small numerical issue with `matrix_to_axis_angle(..., fast=True)` near `pi`.
bottler feel free to check this out, it's a single-line change.
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1953
Reviewed By: MichaelRamamonjisoa
Differential Revision: D70088251
Pulled By: bottler
fbshipit-source-id: 54cc7f946283db700cec2cd5575cf918456b7f32
Summary:
Remove headers flagged by facebook-unused-include-check over fbcode.vision.
+ format and autodeps
This is a codemod. It was automatically generated and will be landed once it is approved and tests are passing in sandcastle.
You have been added as a reviewer by Sentinel or Butterfly.
Autodiff project: uiv
Autodiff partition: fbcode.vision
Autodiff bookmark: ad.uiv.fbcode.vision
Reviewed By: dtolnay
Differential Revision: D70403619
fbshipit-source-id: d109c15774eeb3d809875f75fa2a26ed20d7f9a6
Summary:
This is an extension of https://github.com/facebookresearch/pytorch3d/issues/1544 with various speed, stability, and readability improvements. (I could not find a way to make a commit to the existing PR). This PR is still based on the [Rodrigues' rotation formula](https://en.wikipedia.org/wiki/Rotation_formalisms_in_three_dimensions#Rotation_matrix_%E2%86%94_Euler_axis/angle).
The motivation is the same; this change speeds up the conversions up to 10x, depending on the device, batch size, etc.
### Notes
- As the angles get very close to `π`, the existing implementation and the proposed one start to differ. However, (my understanding is that) this is not a problem, as the axis cannot be stably inferred from the rotation matrix in this case in general.
- bottler , I tried to follow similar conventions as existing functions to deal with weird angles, let me know if something needs to be changed to merge this.
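For illustration, a rough Rodrigues-based sketch of the matrix-to-axis-angle direction (assumed helper name; not the library implementation, and not numerically careful near 0 or π):
```python
import torch

def matrix_to_axis_angle_sketch(R: torch.Tensor) -> torch.Tensor:
    # R: (..., 3, 3) rotation matrices -> (..., 3) axis-angle vectors.
    trace = R[..., 0, 0] + R[..., 1, 1] + R[..., 2, 2]
    angle = torch.acos(((trace - 1.0) / 2.0).clamp(-1.0, 1.0))
    # Axis is proportional to the off-diagonal skew-symmetric part of R.
    axis = torch.stack(
        (
            R[..., 2, 1] - R[..., 1, 2],
            R[..., 0, 2] - R[..., 2, 0],
            R[..., 1, 0] - R[..., 0, 1],
        ),
        dim=-1,
    )
    axis = axis / (2.0 * torch.sin(angle)[..., None]).clamp(min=1e-8)
    return axis * angle[..., None]
```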
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1948
Reviewed By: MichaelRamamonjisoa
Differential Revision: D69193009
Pulled By: bottler
fbshipit-source-id: e5ed34b45b625114ec4419bb89e22a6aefad4eeb
Summary:
This is a somewhat backward-incompatible change: some None paths will be replaced by metadata paths, even when they were not used for data loading.
Moreover, it removes the legacy fix to the paths in the old CO3D release.
Reviewed By: bottler
Differential Revision: D69048238
fbshipit-source-id: 2a8b26d7b9f5e2adf39c65888b5863a5a9de1996
Summary: Update Pytorch3D to be able to run assetgen (see later diffs in the stack)
Reviewed By: shapovalov
Differential Revision: D65942513
fbshipit-source-id: 1d01141c9f7e106608fa591be6e0d3262cb5944f
Summary: We did not often need to extend sequence-level metadata, but now, for applications like text-to-3D/video, we need to store captions and similar fields.
Reviewed By: bottler
Differential Revision: D68269926
fbshipit-source-id: f8af308adce51863d719a335d85cd2558943bd4c
Summary:
It is often easier to store the mask together with RGB, especially for renders. The logic in this diff:
* if load_mask is set and mask_path is provided, take the mask from mask_path;
* otherwise, check whether the image has an alpha channel and use it as the mask.
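A small sketch of the alpha-channel fallback (hypothetical helper name; the actual loader also honours load_mask / mask_path as described above):
```python
from typing import Optional

import torch

def mask_from_rgba_sketch(image: torch.Tensor) -> Optional[torch.Tensor]:
    # image: (C, H, W). If a 4th (alpha) channel is present, use it as the mask.
    if image.shape[0] == 4:
        return image[3:4]  # keep the channel dim: (1, H, W)
    return None
```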
Reviewed By: antoinetlc
Differential Revision: D68160212
fbshipit-source-id: d9b6779f90027a4987ba96800983f441edff9c74
Summary: This function makes it easier to extend the FrameData class with new channels; this diff brushes it up a bit.
Reviewed By: bottler
Differential Revision: D67816470
fbshipit-source-id: 6575415c864d0f539e283889760cd2331bf226a7
Summary: Now that we have SQLAlchemy 2.0, we can make full use of it.
Reviewed By: bottler
Differential Revision: D66920096
fbshipit-source-id: 25c0ea1c4f7361e66348035519627dc961b9e6e6
Summary:
Converts the directory specified to use the Ruff formatter in pyfmt
ruff_dog
If this diff causes merge conflicts when rebasing, please run
`hg status -n -0 --change . -I '**/*.{py,pyi}' | xargs -0 arc pyfmt`
on your diff, and amend any changes before rebasing onto latest.
That should help reduce or eliminate any merge conflicts.
allow-large-files
Reviewed By: bottler
Differential Revision: D66472063
fbshipit-source-id: 35841cb397e4f8e066e2159550d2f56b403b1bef
Summary:
- Hipified PyTorch Pulsar
- Created a separate target for Pulsar tests and enabled RE testing
- The full PyTorch3D test suite requires additional work, like fixing EGL dependencies on AMD
Reviewed By: danzimm
Differential Revision: D61339912
fbshipit-source-id: 0d10bc966e4de4a959f3834a386bad24e449dc1f
Summary: `c10::optional` is an alias for `std::optional`. Let's remove the alias and use the real thing.
Reviewed By: meyering
Differential Revision: D63402341
fbshipit-source-id: 241383e7ca4b2f3f1f9cac3af083056123dfd02b
Summary: `c10::optional` is an alias for `std::optional`. Let's remove the alias and use the real thing.
Reviewed By: palmje
Differential Revision: D63409387
fbshipit-source-id: fb6db59a14db9e897e2e6b6ad378f33bf2af86e8
Summary: These are failing in CI.
Reviewed By: das-intensity
Differential Revision: D62594666
fbshipit-source-id: 5e3a7441be2978803dc2d3e361365e0fffa7ad3b
Summary:
Make the negative index actually not an error
fixes https://github.com/facebookresearch/pytorch3d/issues/1368
Reviewed By: das-intensity
Differential Revision: D62177991
fbshipit-source-id: e5ed433bde1f54251c4d4b6db073c029cbe87343
Summary:
Apparently pytorch 2.4 is now supported as per [this closed issue](https://github.com/facebookresearch/pytorch3d/issues/1863).
Added the `2.4.0` & `2.4.1` versions to `regenerate.py`, then ran it as per `README_fb.md`, which generated the `config.yml` changes.
Reviewed By: bottler
Differential Revision: D62517831
fbshipit-source-id: 002e276dfe2fa078136ff2f6c747d937abbadd1a
Summary:
X-link: https://github.com/pytorch/pytorch/pull/133343
X-link: https://github.com/fairinternal/pytorch3d/pull/45
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1851
Enables pytorch3d to build on AMD. An important part of enabling this was not compiling the Pulsar backend when the target is AMD. There are simply too many kernel incompatibilities to make it work (I tried haha). Fortunately, it doesn't seem like most modern applications of pytorch3d rely on Pulsar. We should be able to unlock most of pytorch3d's goodness on AMD without it.
Reviewed By: bottler, houseroad
Differential Revision: D61171993
fbshipit-source-id: fd4aee378a3568b22676c5bf2b727c135ff710af
Summary: To avoid the installation instructions for PyTorch3D becoming out-of-date, instead of specifying certain Python versions, update them to just `Python`. The reader will understand it has to be a Python version compatible with GitHub.
Reviewed By: bottler
Differential Revision: D60919848
fbshipit-source-id: 5e974970a0db3d3d32fae44e5dd30cbc1ce237a9
Summary:
* Adds a "max" option for the point_reduction input to the
chamfer_distance function.
* When combining the x and y directions, maxes the losses instead
of summing them when point_reduction="max".
* Moves batch reduction to happen after the directions are
combined.
* Adds test_chamfer_point_reduction_max and
test_single_directional_chamfer_point_reduction_max tests.
Fixes https://github.com/facebookresearch/pytorch3d/issues/1838
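A usage sketch of the option added above, with hypothetical random point clouds:
```python
import torch
from pytorch3d.loss import chamfer_distance

x = torch.rand(4, 128, 3)
y = torch.rand(4, 256, 3)
# Per cloud, take the max over points in each direction, then the max over the
# two directions; batch reduction (default "mean") happens afterwards.
loss, _ = chamfer_distance(x, y, point_reduction="max")
```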
Reviewed By: bottler
Differential Revision: D60614661
fbshipit-source-id: 7879816acfda03e945bada951b931d2c522756eb
Summary: This diff is fixing a backwards-compatibility issue in PyTorch3D's dataset API. The code ensures that the `crop_bbox_xywh` attribute is set when the box_crop flag is on. This is an implementation detail that people should not really use; however, some people depend on this behaviour.
Reviewed By: bottler
Differential Revision: D59777449
fbshipit-source-id: b875e9eb909038b8629ccdade87661bb2c39d529
Summary: This is not actually needed and is causing confusion with conda-forge to do with python_abi, which requires users to pass `-c conda-forge` when they install pytorch3d.
Reviewed By: patricklabatut
Differential Revision: D59587930
fbshipit-source-id: 961ae13a62e1b2b2ce6d8781db38bd97eca69e65
Summary: Problems with timeouts on old builds.
Reviewed By: MichaelRamamonjisoa
Differential Revision: D58819435
fbshipit-source-id: e1976534a102ad3841f3b297c772e916aeea12cb
Summary:
Currently, it is not possible to access a sub-transform using an indexer for all 3d transforms inheriting the `Transforms3d` class.
For instance:
```python
from pytorch3d import transforms
N = 10
r = transforms.random_rotations(N)
T = transforms.Transform3d().rotate(R=r)
R = transforms.Rotate(r)
x = T[0] # ok
x = R[0] # TypeError: __init__() got an unexpected keyword argument 'matrix'
```
This is because all these classes (namely `Rotate`, `Translate`, `Scale`, `RotateAxisAngle`) inherit the `__getitem__()` method from `Transform3d` which has the [following code on line 201](https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/transforms/transform3d.py#L201):
```python
return self.__class__(matrix=self.get_matrix()[index])
```
The four classes inheriting `Transform3d` are not initialized through a matrix argument, hence they error.
I propose to modify the `__getitem__()` method of the `Transform3d` class to fix this behavior. The least invasive way to do it I can think of consists of creating an empty instance of the current class, then setting the `_matrix` attribute manually. Thus, instead of
```python
return self.__class__(matrix=self.get_matrix()[index])
```
I propose to do:
```python
instance = self.__class__.__new__(self.__class__)
instance._matrix = self.get_matrix()[index]
return instance
```
As far as I can tell, this modification causes no change whatsoever for the user, except for adding the ability to index all 3d transforms.
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1801
Reviewed By: MichaelRamamonjisoa
Differential Revision: D58410389
Pulled By: bottler
fbshipit-source-id: f371e4c63d2ae4c927a7ad48c2de8862761078de
Summary: Undoes the pytorch3d changes in D57294278 because they break builds for PyTorch<2.1.
Reviewed By: MichaelRamamonjisoa
Differential Revision: D57379779
fbshipit-source-id: 47a12511abcec4c3f4e2f62eff5ba99deb2fab4c
Summary:
Currently, it checks that dimension 2 of `p2` is the same size as dimension 2 of `p2`, instead of comparing against `p1`.
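In other words, the intended validation is along these lines (a sketch, not the library code):
```python
def check_dims_sketch(p1, p2):
    # The point dimension (dim 2) of p1 must match that of p2.
    if p1.shape[2] != p2.shape[2]:
        raise ValueError("p1 and p2 must have the same point dimension.")
```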
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1815
Reviewed By: MichaelRamamonjisoa
Differential Revision: D58586966
Pulled By: bottler
fbshipit-source-id: d4f723fa264f90fe368c10825c1acdfdc4c406dc
Summary: We can now move ray bundle to float dtype (e.g. from fp16 like types).
Reviewed By: bottler
Differential Revision: D57493109
fbshipit-source-id: 4e18a427e968b646fe5feafbff653811cd007981
Summary: `c10::optional` was switched to be `std::optional` after PyTorch moved to C++17. Let's eliminate `c10::optional`, if we can.
Reviewed By: albanD
Differential Revision: D57294278
fbshipit-source-id: f6f26133c43f8d92a4588f59df7d689e7909a0cd
Summary:
This diff removes a variable that was set, but which was not used.
LLVM-15 has a warning `-Wunused-but-set-variable` which we treat as an error because it's so often diagnostic of a code issue. Unused but set variables often indicate a programming mistake, but can also just be unnecessary cruft that harms readability and performance.
Removing this variable will not change how your code works, but the unused variable may indicate your code isn't working the way you thought it was. I've gone through each of these by hand, but mistakes may have slipped through. If you feel the diff needs changes before landing, **please commandeer** and make appropriate changes: there are hundreds of these and responding to them individually is challenging.
For questions/comments, contact r-barnes.
- If you approve of this diff, please use the "Accept & Ship" button :-)
Reviewed By: bottler
Differential Revision: D56886956
fbshipit-source-id: 0c515ed98b812b1c106a59e19ec90751ce32e8c0
Summary:
For larger N and Mi values (e.g. N=154, Mi=238), I noticed list_to_packed() has become a bottleneck for my application. By removing the for loop and running on the GPU, I see a 10-20x speedup.
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1737
Reviewed By: MichaelRamamonjisoa
Differential Revision: D54187993
Pulled By: bottler
fbshipit-source-id: 16399a24cb63b48c30460c7d960abef603b115d0
Summary:
Adjusted sample_nums to match the number of columns in the image grid. It originally produced an image grid with 5 axes but only 3 images; after this fix, the block works as intended.
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1768
Reviewed By: MichaelRamamonjisoa
Differential Revision: D55632872
Pulled By: bottler
fbshipit-source-id: 44d633a8068076889e49d49b8a7910dba0db37a7
Summary:
### Generalise tutorials' pip searching:
## Required Information:
This diff contains changes to several PyTorch3D tutorials.
**Purpose of this diff:**
Replace the current installation code with a more streamlined approach that tries to install the wheel first and falls back to installing from source if the wheel is not found.
**Why this diff is required:**
This diff makes it easier to cope with new PyTorch releases and reduces the need for manual intervention, as the current process involves checking the version of PyTorch in Colab and building a new wheel if it doesn't match the expected version, which generates additional work each time there is a new PyTorch version in Colab.
**Changes:**
Before:
```
if torch.__version__.startswith("2.1.") and sys.platform.startswith("linux"):
    # We try to install PyTorch3D via a released wheel.
    pyt_version_str=torch.__version__.split("+")[0].replace(".", "")
    version_str="".join([
        f"py3{sys.version_info.minor}_cu",
        torch.version.cuda.replace(".",""),
        f"_pyt{pyt_version_str}"
    ])
    !pip install fvcore iopath
    !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
else:
    # We try to install PyTorch3D from source.
    !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
```
After:
```
pyt_version_str=torch.__version__.split("+")[0].replace(".", "")
version_str="".join([
    f"py3{sys.version_info.minor}_cu",
    torch.version.cuda.replace(".",""),
    f"_pyt{pyt_version_str}"
])
!pip install fvcore iopath
if sys.platform.startswith("linux"):
    # We try to install PyTorch3D via a released wheel.
    !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
pip_list = !pip freeze
need_pytorch3d = not any(i.startswith("pytorch3d==") for i in pip_list)
if need_pytorch3d:
    # We try to install PyTorch3D from source.
    !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
```
Reviewed By: bottler
Differential Revision: D55431832
fbshipit-source-id: a8de9162470698320241ae8401427dcb1ce17c37
Summary:
Fix an inclusive vs exclusive scan mix-up that was accidentally introduced when removing the Thrust dependency (`Thrust::exclusive_scan`) and reimplementing it using `at::cumsum` (which does an inclusive scan).
This fixes two Github reported issues:
* https://github.com/facebookresearch/pytorch3d/issues/1731
* https://github.com/facebookresearch/pytorch3d/issues/1751
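For context, the difference between the two scans (and the usual way to recover an exclusive scan from a cumulative sum) is:
```python
import torch

x = torch.tensor([3, 1, 4, 1, 5])
inclusive = torch.cumsum(x, dim=0)  # tensor([ 3,  4,  8,  9, 14])
exclusive = inclusive - x           # tensor([0, 3, 4, 8, 9]) -- what an exclusive scan yields
```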
Reviewed By: bottler
Differential Revision: D54605545
fbshipit-source-id: da9e92f3f8a9a35f7b7191428d0b9a9ca03e0d4d
Summary: This diff adds support for colors in cubify for align = "center".
Reviewed By: bottler
Differential Revision: D53777011
fbshipit-source-id: ccb2bd1e3d89be3d1ac943eff08f40e50b0540d9
Summary: Add an option to run tests without the OpenGL Renderer.
Reviewed By: patricklabatut
Differential Revision: D53573400
fbshipit-source-id: 54a14e7b2f156d24e0c561fdb279f4a9af01b793
Summary:
Fixes https://github.com/facebookresearch/pytorch3d/issues/1641. The bug was caused by the mistaken downcasting of an int64_t into int, causing issues only on inputs large enough to have hashes that escaped the bounds of an int32.
Also added a test case for this issue.
Reviewed By: bottler
Differential Revision: D53505370
fbshipit-source-id: 0fdd0efc6d259cc3b0263e7ff3a4ab2c648ec521
Summary: This change updates the type of p2_idx from size_t to int64_t to address compiler warnings related to signed/unsigned comparison.
Reviewed By: bottler
Differential Revision: D52879393
fbshipit-source-id: de5484d78a907fccdaae3ce036b5e4a1a0a4de70
Summary: Fixed `get_rgbd_point_cloud` to take any number of image input channels.
Reviewed By: bottler
Differential Revision: D52796276
fbshipit-source-id: 3ddc0d1e337a6cc53fc86c40a6ddb136f036f9bc
Summary:
An OSS user has pointed out in https://github.com/facebookresearch/pytorch3d/issues/1703 that the output of matrix_to_quaternion (in that file) can be non-standardized.
This diff solves the issue by adding a standardization step at the end of the function.
Reviewed By: bottler
Differential Revision: D52368721
fbshipit-source-id: c8d0426307fcdb7fd165e032572382d5ae360cde
Summary: Implement `submeshes` for TexturesUV. Fix what Meshes.submeshes passes to the texture's submeshes function to make this possible.
Reviewed By: bottler
Differential Revision: D52192060
fbshipit-source-id: 526734962e3376aaf75654200164cdcebfff6997
Summary: Performance improvement: Use torch.lerp to map uv coordinates to the range needed for grid_sample (i.e. map [0, 1] to [-1, 1] and invert the y-axis)
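A sketch of the mapping (not the exact library code): a single `torch.lerp` sends u in [0, 1] to 2u - 1 and v in [0, 1] to 1 - 2v, which is the flipped-y range `grid_sample` expects.
```python
import torch

uv = torch.rand(8, 2)  # hypothetical uv coordinates in [0, 1]
start = torch.tensor([-1.0, 1.0])
end = torch.tensor([1.0, -1.0])
grid = torch.lerp(start, end, uv)  # x -> 2u - 1, y -> 1 - 2v
```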
Reviewed By: bottler
Differential Revision: D51961728
fbshipit-source-id: db19a5e3f482e9af7b96b20f88a1e5d0076dac43
Summary: User confusion (https://github.com/facebookresearch/pytorch3d/issues/1579) about how zbuf is used for alpha compositing. Added small description and reference to paper to help give some context.
Reviewed By: bottler
Differential Revision: D51374933
fbshipit-source-id: 8c489a5b5d0a81f0d936c1348b9ade6787c39c9a
Summary: Fixes lint in test_render_points in the PyTorch3D library.
Differential Revision: D51289841
fbshipit-source-id: 1eae621eb8e87b0fe5979f35acd878944f574a6a
Summary:
When the ply format looks as follows:
```
comment TextureFile ***.png
element vertex 892
property double x
property double y
property double z
property double nx
property double ny
property double nz
property double texture_u
property double texture_v
```
the `MeshPlyFormat` class will read the UVs from the ply file and load the UV map from the file named in the TextureFile comment.
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1100
Reviewed By: MichaelRamamonjisoa
Differential Revision: D50885176
Pulled By: bottler
fbshipit-source-id: be75b1ec9a17a1ed87dbcf846a9072ea967aec37
Summary: Remove unused argument `mask_points` from `get_rgbd_point_cloud` and fix `get_implicitron_sequence_pointcloud`, which assumed it was used.
Reviewed By: MichaelRamamonjisoa
Differential Revision: D50885848
fbshipit-source-id: c0b834764ad5ef560107bd8eab04952d000489b8
Summary: I think we include more Thrust than needed, and maybe removing it will help things like https://github.com/facebookresearch/pytorch3d/issues/1610 with DebugSyncStream errors on Windows.
Reviewed By: shapovalov
Differential Revision: D48949888
fbshipit-source-id: add889c0acf730a039dc9ffd6bbcc24ded20ef27
Summary: Python3 makes the use of `(object)` in class inheritance unnecessary. Let's modernize our code by eliminating this.
Reviewed By: itamaro
Differential Revision: D48673863
fbshipit-source-id: 032d6028371f0350252e6b731c74f0f5933c83cd
Summary:
The `chamfer_distance` function currently allows `"sum"` or `"mean"` reduction, but does not support returning unreduced (per-point) loss terms. Unreduced losses could be useful if the user wishes to inspect individual losses, or perform additional modifications to loss terms before reduction. One example would be implementing a robust kernel over the loss.
This PR adds a `None` option to the `point_reduction` parameter, similar to `batch_reduction`. In case of bi-directional chamfer loss, both the forward and backward distances are returned (a tuple of Tensors of shape `[D, N]` is returned). If normals are provided, similar logic applies to normals as well.
This PR addresses issue https://github.com/facebookresearch/pytorch3d/issues/622.
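A usage sketch with hypothetical point clouds (assuming `batch_reduction=None` is also requested when asking for unreduced losses):
```python
import torch
from pytorch3d.loss import chamfer_distance

x = torch.rand(2, 128, 3)
y = torch.rand(2, 256, 3)
(cham_x, cham_y), _ = chamfer_distance(
    x, y, point_reduction=None, batch_reduction=None
)
# cham_x: per-point terms from x to y, shape (2, 128)
# cham_y: per-point terms from y to x, shape (2, 256)
```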
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1605
Reviewed By: jcjohnson
Differential Revision: D48313857
Pulled By: bottler
fbshipit-source-id: 35c824827a143649b04166c4817449e1341b7fd9
Summary:
Something's wrong with recommonmark/CommonMark/six, let's see if this fixes it.
https://readthedocs.org/projects/pytorch3d/builds/21292632/
```
File "/home/docs/checkouts/readthedocs.org/user_builds/pytorch3d/envs/latest/lib/python3.11/site-packages/sphinx/config.py", line 368, in eval_config_file
execfile_(filename, namespace)
File "/home/docs/checkouts/readthedocs.org/user_builds/pytorch3d/envs/latest/lib/python3.11/site-packages/sphinx/util/pycompat.py", line 150, in execfile_
exec_(code, _globals)
File "/home/docs/checkouts/readthedocs.org/user_builds/pytorch3d/checkouts/latest/docs/conf.py", line 25, in <module>
from recommonmark.parser import CommonMarkParser
File "/home/docs/checkouts/readthedocs.org/user_builds/pytorch3d/envs/latest/lib/python3.11/site-packages/recommonmark/parser.py", line 6, in <module>
from CommonMark import DocParser, HTMLRenderer
File "/home/docs/checkouts/readthedocs.org/user_builds/pytorch3d/envs/latest/lib/python3.11/site-packages/CommonMark/__init__.py", line 3, in <module>
from CommonMark.CommonMark import HTMLRenderer
File "/home/docs/checkouts/readthedocs.org/user_builds/pytorch3d/envs/latest/lib/python3.11/site-packages/CommonMark/CommonMark.py", line 18, in <module>
HTMLunescape = html.parser.HTMLParser().unescape
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'HTMLParser' object has no attribute 'unescape'
```
Reviewed By: shapovalov
Differential Revision: D47471545
fbshipit-source-id: 48e121e20da535b3cc46b6bd2393d28869067b8b
Summary: New versions of cuda etc. I haven't committed recent changes to this for a while
Reviewed By: shapovalov
Differential Revision: D47396136
fbshipit-source-id: d6c27f5056fa8f4a74a628fa1d831159000acf55
Summary: This is needed from September 2023. As a side effect, implicitron docs should build better because typing.get_args exists, etc.
Reviewed By: shapovalov
Differential Revision: D47363855
fbshipit-source-id: a954c5b81b1e5a4435fca146a11aea0d2ca96f45
Summary:
Blender uses OpenEXR to dump depth maps, so we have to support it.
OpenCV requires explicitly accepting the vulnerabilities by setting the env var before exporting.
We could set it ourselves, but I think it should be the user's responsibility.
OpenCV error reporting is adequate, so I don't handle the error on our side.
Reviewed By: bottler
Differential Revision: D47403884
fbshipit-source-id: 2fcadd1df9d0efa0aea563bcfb2e3180b3c4d1d7
Summary:
For fg-masking depth, we assumed np.array but passed a Tensor; for defining the default depth_mask, vice versa.
Note that we change the intended behaviour for the latter, assuming that 0s are areas with empty depth. When loading depth masks, we replace NaNs with zeros, so it is sensible. It is not a BC change, as that branch would crash if executed. Since there were no reports, I assume no one cared.
Reviewed By: bottler
Differential Revision: D47403588
fbshipit-source-id: 1094104176d7d767a5657b5bbc9f5a0cc9da0ede
Summary:
Convert ImplicitronRayBundle to a "classic" class instead of a dataclass. This change is introduced as a way to preserve the ImplicitronRayBundle interface while allowing two outcomes:
- the init lengths argument is now an Optional[torch.Tensor] instead of torch.Tensor
- lengths is now a property which returns a `torch.Tensor`. The lengths property will either recompute lengths from bins or return the stored `_lengths`. `_lengths` is None if bins is set. It saves us a bit of memory.
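A rough sketch of the idea (not the actual class):
```python
import torch

class RayBundleSketch:
    def __init__(self, lengths=None, bins=None):
        # Store lengths only when bins are absent; bins make stored lengths redundant.
        self._lengths = None if bins is not None else lengths
        self.bins = bins

    @property
    def lengths(self) -> torch.Tensor:
        if self.bins is not None:
            # Recompute lengths as the midpoints of consecutive bin edges.
            return torch.lerp(self.bins[..., :-1], self.bins[..., 1:], 0.5)
        return self._lengths
```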
Reviewed By: shapovalov
Differential Revision: D46686094
fbshipit-source-id: 3c75c0947216476ebff542b6f552d311024a679b
Summary:
## Context
Bins are used in mipnerf to allow intervals to be manipulated easily. For example, `bins[..., :-1]` gives all the left coordinates of your intervals, while `bins[..., 1:]` gives the right coordinates.
We introduce here the support of bins like in MipNerf implementation.
## RayPointRefiner
Small changes have been made to modify RayPointRefiner.
- If bins is None
```
mids = torch.lerp(ray_bundle.lengths[..., 1:], ray_bundle.lengths[..., :-1], 0.5)
z_samples = sample_pdf(
    mids,  # [..., npt]
    weights[..., 1:-1],  # [..., npt - 1]
    ...
)
```
- If bins is not None
In the MipNerf implementation, the sampling is done on all the bins. It allows us to use the full weights tensor without slicing it.
```
z_samples = sample_pdf(
    ray_bundle.bins,  # [..., npt + 1]
    weights,  # [..., npt]
    ...
)
```
## RayMarcher
Add a ray_deltas optional argument. If None, keep the same deltas computation from ray_lengths.
Reviewed By: shapovalov
Differential Revision: D46389092
fbshipit-source-id: d4f1963310065bd31c1c7fac1adfe11cbeaba606
Summary:
Add blurpool as defined in [MIP-NeRF](https://arxiv.org/abs/2103.13415).
It has been added as an option for RayPointRefiner.
Reviewed By: shapovalov
Differential Revision: D46356189
fbshipit-source-id: ad841bad86d2b591a68e1cb885d4f781cf26c111
Summary: Add a new implicit module Integral Position Encoding based on [MIP-NeRF](https://arxiv.org/abs/2103.13415).
Reviewed By: shapovalov
Differential Revision: D46352730
fbshipit-source-id: c6a56134c975d80052b3a11f5e92fd7d95cbff1e
Summary:
Introduce methods to approximate the radii of conical frustums along rays as described in [MipNerf](https://arxiv.org/abs/2103.13415):
- Two new attributes are added to ImplicitronRayBundle: bins and radii. Bins is of size n_pts_per_ray + 1, which allows us to easily manipulate the n_pts_per_ray intervals; for example, we need the interval coordinates in the radii computation for \(t_{\mu}, t_{\delta}\). Radii are used to store the radii of the conical frustums.
- Add 3 new methods to compute the radii:
- approximate_conical_frustum_as_gaussians: It computes the mean along the ray direction, the variance of the
conical frustum with respect to t and variance of the conical frustum with respect to its radius. This
implementation follows the stable computation defined in the paper.
- compute_3d_diagonal_covariance_gaussian: Will leverage the two previously computed variances to find the
diagonal covariance of the Gaussian.
- conical_frustum_to_gaussian: Mix everything together to compute the means and the diagonal covariances along
the ray of the Gaussians.
- In AbstractMaskRaySampler, introduces the attribute `cast_ray_bundle_as_cone`. If False, it won't change the previous behaviour of the RaySampler. However, if True, the samplers will sample `n_pts_per_ray + 1` points instead of `n_pts_per_ray`. These points are then used to set the bins attribute of ImplicitronRayBundle. Support for HeterogeneousRayBundle has not been added since the current code does not allow it. A safeguard has been added to avoid a silent bug in the future.
Reviewed By: shapovalov
Differential Revision: D45269190
fbshipit-source-id: bf22fad12d71d55392f054e3f680013aa0d59b78
Summary: We now use unittest.mock
Reviewed By: shapovalov
Differential Revision: D45868799
fbshipit-source-id: cd1042dc2c49c82c7b9e024f761c496049a31beb
Summary: Make test work in isolation, and when run internally make it not try the sqlalchemy files.
Reviewed By: shapovalov
Differential Revision: D46352513
fbshipit-source-id: 7417a25d7a5347d937631c9f56ae4e3242dd622e
Summary:
Hi,
Not sure this is the best fix. But while running this notebook, I only ever saw a blank canvas when trying to visualize the dolphin. It might be that I have a broken dependency, like plotly. I also don't know what the visualization is "supposed" to look like.
But in case other people have this issue, this one-line change solved the whole problem for me. Now I have a happy, rotatable dolphin.
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1549
Reviewed By: shapovalov
Differential Revision: D46350930
Pulled By: bottler
fbshipit-source-id: e19aa71eb05a93e2955262a2c90d1f0d09576228
Summary: Fix for https://github.com/facebookresearch/pytorch3d/issues/1441 where we were indexing with a tensor on the wrong device.
Reviewed By: shapovalov
Differential Revision: D46276449
fbshipit-source-id: 7750ed45ffecefa5d291fd1eadfe515310c2cf0d
Summary: Making it easier for the clients to use these datasets.
Reviewed By: bottler
Differential Revision: D46727179
fbshipit-source-id: cf619aee4c4c0222a74b30ea590cf37f08f014cc
Summary: In D42739669, I forgot to update the API of existing implementations of DatasetBase to take `subset_filter`. Looks like only one was missing.
Reviewed By: bottler
Differential Revision: D46724488
fbshipit-source-id: 13ab7a457f853278cf06955aad0cc2bab5fbcce6
Summary:
Adds stratified sampling of sequences within categories applied after category / sequence filters but before the num sequence limit.
It respects the insertion order into the sequence_annots table, i.e. takes top N sequences within each category.
Reviewed By: bottler
Differential Revision: D46724002
fbshipit-source-id: 597cb2a795c3f3bc07f838fc51b4e95a4f981ad3
Summary: Single directional chamfer distance and option to use non-absolute cosine similarity
Reviewed By: bottler
Differential Revision: D46593980
fbshipit-source-id: b2e591706a0cdde1c2d361614cecebb84a581433
Summary: The fine implicit function was called before the coarse implicit function.
Reviewed By: shapovalov
Differential Revision: D46224224
fbshipit-source-id: 6b1cc00cc823d3ea7a5b42774c9ec3b73a69edb5
Summary:
1. We may need to store arrays of unknown shape in the database. It implements and tests serialisation.
2. Previously, when a non-existent metadata file was passed to SqlIndexDataset, it would try to open it and create an empty file, then crash. We now open the file in read-only mode, so the error message is more intuitive. Note that the implementation is SQLite specific.
Reviewed By: bottler
Differential Revision: D46047857
fbshipit-source-id: 3064ae4f8122b4fc24ad3d6ab696572ebe8d0c26
Summary: I don't know why RE tests sometimes fail here, but maybe it's a race condition. If that's right, this should fix it.
Reviewed By: shapovalov
Differential Revision: D46020054
fbshipit-source-id: 20b746b09ad9bd77c2601ac681047ccc6cc27ed9
Summary:
This is mostly a refactoring diff to reduce friction in extending the frame data.
Slight functional changes: dataset getitem now accepts (seq_name, frame_number_as_singleton_tensor) as a non-advertised feature. Otherwise this code crashes:
```
item = dataset[0]
dataset[item.sequence_name, item.frame_number]
```
Reviewed By: bottler
Differential Revision: D45780175
fbshipit-source-id: 75b8e8d3dabed954a804310abdbd8ab44a8dea29
Summary: We don't want to use print directly in the stats.print() method. Instead, this method will return the output string to the caller.
Reviewed By: shapovalov
Differential Revision: D45356240
fbshipit-source-id: 2cabe3cdfb9206bf09aa7b3cdd2263148a5ba145
Summary: Drop support for PyTorch 1.9.0 and 1.9.1.
Reviewed By: shapovalov
Differential Revision: D45704329
fbshipit-source-id: c0fe3ecf6a1eb9bcd4163785c0cb4bf4f5060f50
Summary:
typing.NamedTuple was simplified in 3.10
These two fields were the same in 3.8, so this should be a no-op
#buildmore
Reviewed By: bottler
Differential Revision: D45373526
fbshipit-source-id: 2b26156f5f65b7be335133e9e705730f7254260d
Summary:
Although we can load per-vertex normals in `load_obj`, saving per-vertex normals is not supported in `save_obj`.
This patch fixes this by allowing passing per-vertex normal data in `save_obj`:
``` python
def save_obj(
f: PathOrStr,
verts,
faces,
decimal_places: Optional[int] = None,
path_manager: Optional[PathManager] = None,
*,
verts_normals: Optional[torch.Tensor] = None,
faces_normals: Optional[torch.Tensor] = None,
verts_uvs: Optional[torch.Tensor] = None,
faces_uvs: Optional[torch.Tensor] = None,
texture_map: Optional[torch.Tensor] = None,
) -> None:
"""
Save a mesh to an .obj file.
Args:
f: File (str or path) to which the mesh should be written.
verts: FloatTensor of shape (V, 3) giving vertex coordinates.
faces: LongTensor of shape (F, 3) giving faces.
decimal_places: Number of decimal places for saving.
path_manager: Optional PathManager for interpreting f if
it is a str.
verts_normals: FloatTensor of shape (V, 3) giving the normal per vertex.
faces_normals: LongTensor of shape (F, 3) giving the index into verts_normals
for each vertex in the face.
verts_uvs: FloatTensor of shape (V, 2) giving the uv coordinate per vertex.
faces_uvs: LongTensor of shape (F, 3) giving the index into verts_uvs for
each vertex in the face.
texture_map: FloatTensor of shape (H, W, 3) representing the texture map
for the mesh which will be saved as an image. The values are expected
to be in the range [0, 1],
"""
```
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1511
Reviewed By: shapovalov
Differential Revision: D45086045
Pulled By: bottler
fbshipit-source-id: 666efb0d2c302df6cf9f2f6601d83a07856bf32f
Summary:
If my understanding is right, prp_screen[1] should be 32 rather than 48.
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1501
Reviewed By: shapovalov
Differential Revision: D45044406
Pulled By: bottler
fbshipit-source-id: 7dd93312db4986f4701e642ba82d94333466b921
Summary:
I forgot to include these tests in D45086611 when transferring code from the pixar_replay repo.
They test the new ORM types used in the SQL dataset and are SQLAlchemy 2.0 specific.
The test for extending types is an important proof of concept for the generality of the SQL dataset. The idea is to extend FrameAnnotation and FrameData in parallel.
Reviewed By: bottler
Differential Revision: D45529284
fbshipit-source-id: 2a634e518f580c312602107c85fc320db43abcf5
Summary:
Added a suite of functions and code additions to the experimental_gltf_io.py file to enable saving Meshes with TexturesVertex format into a .glb file.
Also added a test to test_io_gltf.py to check the functionality, as described in the Test Plan.
Reviewed By: bottler
Differential Revision: D44969144
fbshipit-source-id: 9ce815a1584b510442fa36cc4dbc8d41cc3786d5
Summary: Remove the need for tuple and reversed in the raysampling xy_grid computation.
Reviewed By: bottler
Differential Revision: D45269342
fbshipit-source-id: d0e4c0923b9a2cca674b35e8d64862043a0eab3b
Summary:
Moving SQL dataset to PyTorch3D. It has been extensively tested in pixar_replay.
It requires SQLAlchemy 2.0, which is not supported in fbcode. So I exclude the sources and tests that depend on it from buck TARGETS.
Reviewed By: bottler
Differential Revision: D45086611
fbshipit-source-id: 0285f03e5824c0478c70ad13731525bb5ec7deef
Summary:
We currently support caching bounding boxes in MaskAnnotation. If present, they are not re-computed from the mask. However, the masks need to be loaded for the bbox to be set.
This diff fixes that. Even if load_masks / load_blobs are unset, the bounding box can be picked up from the metadata.
Reviewed By: bottler
Differential Revision: D45144918
fbshipit-source-id: 8a2e2c115e96070b6fcdc29cbe57e1cee606ddcd
Summary: The code does not crash if depth map/mask are not given.
Reviewed By: bottler
Differential Revision: D45082985
fbshipit-source-id: 3610d8beb4ac897fbbe52f56a6dd012a6365b89b
Summary:
The pattern
```
X.Y if hasattr(X, "Y") else Z
```
can be replaced with
```
getattr(X, "Y", Z)
```
The [getattr](https://www.w3schools.com/python/ref_func_getattr.asp) function gives more succinct code than the [hasattr](https://www.w3schools.com/python/ref_func_hasattr.asp) function. Please use it when appropriate.
**This diff is very low risk. Green tests indicate that you can safely Accept & Ship.**
Reviewed By: bottler
Differential Revision: D44886893
fbshipit-source-id: 86ba23e837217e1ebd64bf8e27d286257894839e