Summary:
* Adds a "max" option for the point_reduction input to the
chamfer_distance function.
* When combining the x and y directions, maxes the losses instead
of summing them when point_reduction="max".
* Moves batch reduction to happen after the directions are
combined.
* Adds test_chamfer_point_reduction_max and
test_single_directional_chamfer_point_reduction_max tests.
Fixes https://github.com/facebookresearch/pytorch3d/issues/1838
Reviewed By: bottler
Differential Revision: D60614661
fbshipit-source-id: 7879816acfda03e945bada951b931d2c522756eb
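The `"max"` point reduction described above can be sketched in plain NumPy. This is a toy illustration of the semantics, not pytorch3d's implementation: the function name `chamfer_sketch` is invented here, and the real `chamfer_distance` operates on batched, padded tensors.

```python
import numpy as np

def chamfer_sketch(x, y, point_reduction="max"):
    """Toy sketch of point_reduction="max": reduce each direction over
    its points with max, then combine the x->y and y->x directions by
    taking the max instead of summing them."""
    # Pairwise squared L2 distances between the two clouds, shape [Nx, Ny].
    d = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    loss_x = d.min(axis=1)  # for each x point, distance to nearest y point
    loss_y = d.min(axis=0)  # for each y point, distance to nearest x point
    if point_reduction == "max":
        # Max over points in each direction, then max across directions.
        return max(loss_x.max(), loss_y.max())
    # Default bi-directional behaviour: sum the two mean losses.
    return loss_x.mean() + loss_y.mean()
```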
Summary:
The `chamfer_distance` function currently allows `"sum"` or `"mean"` reduction, but does not support returning unreduced (per-point) loss terms. Unreduced losses could be useful if the user wishes to inspect individual losses, or perform additional modifications to loss terms before reduction. One example would be implementing a robust kernel over the loss.
This PR adds a `None` option to the `point_reduction` parameter, similar to `batch_reduction`. In the case of bi-directional chamfer loss, both the forward and backward distances are returned (a tuple of Tensors of shape `[D, N]` is returned). If normals are provided, similar logic applies to the normals as well.
This PR addresses issue https://github.com/facebookresearch/pytorch3d/issues/622.
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1605
Reviewed By: jcjohnson
Differential Revision: D48313857
Pulled By: bottler
fbshipit-source-id: 35c824827a143649b04166c4817449e1341b7fd9
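The robust-kernel use case mentioned above can be illustrated with a small NumPy sketch. The names `chamfer_unreduced` and `huber` are hypothetical helpers for illustration; in pytorch3d itself you would pass `point_reduction=None` to `chamfer_distance` to obtain the unreduced losses.

```python
import numpy as np

def chamfer_unreduced(x, y):
    """Sketch of point_reduction=None semantics: return the raw
    per-point nearest-neighbour losses for both directions, so the
    caller can modify them (e.g. with a robust kernel) before reducing."""
    d = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1), d.min(axis=0)  # (per-x losses, per-y losses)

def huber(r, delta=1.0):
    """Example robust kernel applied to per-point losses before reduction."""
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))
```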
Summary: Single directional chamfer distance and option to use non-absolute cosine similarity
Reviewed By: bottler
Differential Revision: D46593980
fbshipit-source-id: b2e591706a0cdde1c2d361614cecebb84a581433
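A minimal NumPy sketch of the two features this commit names, single-directional reduction and the non-absolute cosine similarity option. The function `single_directional_chamfer` is an invented name for illustration; in pytorch3d these correspond to the `single_directional` and `abs_cosine` arguments of `chamfer_distance`.

```python
import numpy as np

def single_directional_chamfer(x, y, x_normals=None, y_normals=None,
                               abs_cosine=True):
    """Sketch: only the x->y nearest-neighbour distances contribute.
    With normals, the cosine similarity to the matched normal is used,
    optionally without the absolute value (abs_cosine=False)."""
    d = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)                       # nearest y for each x
    cham_x = d[np.arange(len(x)), idx].mean()
    if x_normals is None:
        return cham_x
    cos = (x_normals * y_normals[idx]).sum(-1)   # cosine of unit normals
    if abs_cosine:
        cos = np.abs(cos)
    return cham_x, (1.0 - cos).mean()
```

With `abs_cosine=False`, opposed normals are penalized (loss near 2) rather than treated as aligned.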
Summary: Fix divide by zero for empty pointcloud in chamfer. Also for empty batches. In process, needed to regularize num_points_per_cloud for empty batches.
Reviewed By: kjchalup
Differential Revision: D36311330
fbshipit-source-id: 3378ab738bee77ecc286f2110a5c8dc445960340
Summary:
Applies new import merging and sorting from µsort v1.0.
When merging imports, µsort will make a best-effort to move associated
comments to match merged elements, but there are known limitations due to
the dynamic nature of Python and developer tooling. These changes should
not produce any dangerous runtime changes, but may require touch-ups to
satisfy linters and other tooling.
Note that µsort uses case-insensitive, lexicographical sorting, which
results in a different ordering compared to isort. This provides a more
consistent sorting order, matching the case-insensitive order used when
sorting import statements by module name, and ensures that "frog", "FROG",
and "Frog" always sort next to each other.
For details on µsort's sorting and merging semantics, see the user guide:
https://usort.readthedocs.io/en/stable/guide.html#sorting
Reviewed By: bottler
Differential Revision: D35553814
fbshipit-source-id: be49bdb6a4c25264ff8d4db3a601f18736d17be1
Summary:
Added L1 norm for KNN and chamfer op
* The norm is now specified with a variable `norm` which can only be 1 or 2
Reviewed By: bottler
Differential Revision: D35419637
fbshipit-source-id: 77813fec650b30c28342af90d5ed02c89133e136
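A NumPy sketch of the `norm` option described above, including the restriction to 1 or 2. The helper name `pairwise_dist` is invented; in pytorch3d the `norm` argument is passed to `knn_points` and `chamfer_distance`.

```python
import numpy as np

def pairwise_dist(x, y, norm=2):
    """Sketch of the norm option: L1 distances when norm=1, squared L2
    when norm=2; any other value is rejected."""
    if norm not in (1, 2):
        raise ValueError("Support for 1 or 2 norm.")
    diff = x[:, None, :] - y[None, :, :]
    if norm == 1:
        return np.abs(diff).sum(-1)
    return (diff ** 2).sum(-1)
```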
Summary: Update all FB license strings to the new format.
Reviewed By: patricklabatut
Differential Revision: D33403538
fbshipit-source-id: 97a4596c5c888f3c54f44456dc07e718a387a02c
Summary:
Ran the linter.
TODO: need to update the linter as per D21353065.
Reviewed By: bottler
Differential Revision: D21362270
fbshipit-source-id: ad0e781de0a29f565ad25c43bc94a19b1828c020
Summary:
Updates to:
- enable cuda kernel launches on any GPU (not just the default)
- cuda and contiguous checks for all kernels
- checks to ensure all tensors are on the same device
- error reporting in the cuda kernels
- cuda tests now run on a random device not just the default
Reviewed By: jcjohnson, gkioxari
Differential Revision: D21215280
fbshipit-source-id: 1bedc9fe6c35e9e920bdc4d78ed12865b1005519
Summary:
Modify test_chamfer for more robustness. Avoid empty pointclouds, including where point_reduction is mean (for which we currently return nan (*)), so that we aren't looking at an empty gradient. Make sure we aren't using padding as points in the homogeneous cases in the tests, since that leads to a tie between closest points and therefore a potential instability in the gradient - see https://github.com/pytorch/pytorch/issues/35699.
(*) This doesn't attempt to fix the nan.
Reviewed By: nikhilaravi, gkioxari
Differential Revision: D21157322
fbshipit-source-id: a609e84e25a24379c8928ff645d587552526e4af
Summary:
Allow Pointclouds objects and heterogeneous data to be provided for Chamfer loss. Remove "none" as an option for point_reduction because it doesn't make sense and, in the current implementation, is effectively the same as "sum".
Possible improvement: create specialised operations for sum and cosine_similarity of padded tensors, to avoid having to create masks. A padded sum would be useful elsewhere.
Reviewed By: gkioxari
Differential Revision: D20816301
fbshipit-source-id: 0f32073210225d157c029d80de450eecdb64f4d2
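The mask-based handling of heterogeneous batches mentioned above can be sketched in NumPy. `masked_mean` is an invented helper: per-cloud losses sit in a padded `[N, P]` array, and a mask built from the per-cloud point counts zeroes out the padding before reducing.

```python
import numpy as np

def masked_mean(padded_loss, num_points):
    """Sketch: mean per-cloud loss over a padded [N, P] array, ignoring
    padded entries beyond each cloud's true point count."""
    N, P = padded_loss.shape
    # mask[n, p] is True only for real (non-padding) points of cloud n.
    mask = np.arange(P)[None, :] < num_points[:, None]
    return (padded_loss * mask).sum(axis=1) / num_points
```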
Summary: use assertClose in some tests, which enforces shape equality. Fixes some small problems, including graph_conv on an empty graph.
Reviewed By: nikhilaravi
Differential Revision: D20556912
fbshipit-source-id: 60a61eafe3c03ce0f6c9c1a842685708fb10ac5b
Summary: The shebang line `#!<path to interpreter>` is only required for Python scripts, so remove it from source files that only contain class or function definitions. Additionally, explicitly mark as executable the actual Python scripts in the codebase.
Reviewed By: nikhilaravi
Differential Revision: D20095778
fbshipit-source-id: d312599fba485e978a243292f88a180d71e1b55a