Summary:
Applies new import merging and sorting from µsort v1.0.
When merging imports, µsort makes a best-effort attempt to move associated
comments along with the merged elements, but there are known limitations due
to the dynamic nature of Python and developer tooling. These changes should
not produce any dangerous runtime changes, but may require touch-ups to
satisfy linters and other tooling.
Note that µsort uses case-insensitive, lexicographical sorting, which
results in a different ordering compared to isort. This provides a more
consistent sorting order, matching the case-insensitive order used when
sorting import statements by module name, and ensures that "frog", "FROG",
and "Frog" always sort next to each other.
For details on µsort's sorting and merging semantics, see the user guide:
https://usort.readthedocs.io/en/stable/guide.html#sorting
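As a quick illustration (plain Python, not µsort's own code), the same
case-insensitive, lexicographical ordering can be reproduced with a
str.casefold sort key:

    # Differently-cased spellings of the same name stay adjacent.
    names = ["zebra", "FROG", "ant", "frog", "Frog"]
    print(sorted(names, key=str.casefold))
    # ['ant', 'FROG', 'frog', 'Frog', 'zebra']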
Reviewed By: bottler
Differential Revision: D35553814
fbshipit-source-id: be49bdb6a4c25264ff8d4db3a601f18736d17be1
Summary:
Added L1 norm support for the KNN and chamfer ops
* The norm is now specified with a `norm` argument, which can only be 1 or 2
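A minimal usage sketch (assuming chamfer_distance and knn_points expose the
new norm argument as described; shapes and values are placeholders):

    import torch
    from pytorch3d.loss import chamfer_distance
    from pytorch3d.ops import knn_points

    x = torch.rand(2, 128, 3)
    y = torch.rand(2, 256, 3)

    # norm=1 uses L1 distances; norm=2 keeps the previous default behaviour.
    # Any other value is rejected.
    loss_l1, _ = chamfer_distance(x, y, norm=1)
    nn_l1 = knn_points(x, y, K=4, norm=1)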
Reviewed By: bottler
Differential Revision: D35419637
fbshipit-source-id: 77813fec650b30c28342af90d5ed02c89133e136
Summary: Update all FB license strings to the new format.
Reviewed By: patricklabatut
Differential Revision: D33403538
fbshipit-source-id: 97a4596c5c888f3c54f44456dc07e718a387a02c
Summary:
Ran the linter.
TODO: need to update the linter as per D21353065.
Reviewed By: bottler
Differential Revision: D21362270
fbshipit-source-id: ad0e781de0a29f565ad25c43bc94a19b1828c020
Summary:
Updates (a Python-level sketch of the new checks follows this list) to:
- enable cuda kernel launches on any GPU (not just the default)
- cuda and contiguous checks for all kernels
- checks to ensure all tensors are on the same device
- error reporting in the cuda kernels
- cuda tests now run on a random device not just the default
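A hedged, Python-level illustration of the checks described above (the real
checks live in the C++/CUDA ops; the helper names here are hypothetical):

    import torch

    def _check_inputs(*tensors):
        # All inputs must be CUDA tensors, contiguous, and on the same device.
        device = tensors[0].device
        for t in tensors:
            assert t.is_cuda, "expected a CUDA tensor"
            assert t.device == device, "all tensors must be on the same device"
            assert t.is_contiguous(), "expected a contiguous tensor"
        return device

    def run_op(a, b):
        device = _check_inputs(a, b)
        # Launch on the inputs' device, not just the current default device.
        with torch.cuda.device(device):
            return a + b  # stand-in for the real kernel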
Reviewed By: jcjohnson, gkioxari
Differential Revision: D21215280
fbshipit-source-id: 1bedc9fe6c35e9e920bdc4d78ed12865b1005519
Summary:
Modify test_chamfer for more robustness. Avoid empty pointclouds, both because point_reduction="mean" currently returns nan for them (*) and so that we aren't inspecting an empty gradient. Also make sure padding values are not used as real points in the homogeneous test cases, since that leads to a tie between closest points and therefore a potential instability in the gradient - see https://github.com/pytorch/pytorch/issues/35699.
(*) This doesn't attempt to fix the nan.
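A hypothetical illustration (toy values, not from the test) of why ties are a
problem: when two candidate neighbours are exactly equidistant, the gradient
of the nearest-neighbour distance flows to whichever tied point the backend
happens to select:

    import torch

    p = torch.tensor([[0.0, 0.0], [0.0, 0.0]], requires_grad=True)  # duplicated point
    q = torch.tensor([1.0, 1.0])
    d = ((p - q) ** 2).sum(dim=1)   # both distances are equal -> a tie
    d.min(dim=0).values.backward()
    print(p.grad)                   # all of the gradient lands on one of the two tied rows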
Reviewed By: nikhilaravi, gkioxari
Differential Revision: D21157322
fbshipit-source-id: a609e84e25a24379c8928ff645d587552526e4af
Summary:
Allow Pointclouds objects and heterogeneous data to be provided for Chamfer loss. Remove "none" as an option for point_reduction because it doesn't make sense and, in the current implementation, is effectively the same as "sum".
Possible improvement: create specialised operations for sum and cosine_similarity of padded tensors, to avoid having to create masks. sum would be useful elsewhere.
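A brief usage sketch (assuming chamfer_distance accepts Pointclouds as
described; shapes are placeholders):

    import torch
    from pytorch3d.loss import chamfer_distance
    from pytorch3d.structures import Pointclouds

    # Heterogeneous batch: each cloud has a different number of points.
    x = Pointclouds([torch.rand(100, 3), torch.rand(60, 3)])
    y = Pointclouds([torch.rand(80, 3), torch.rand(120, 3)])

    # point_reduction may be "mean" or "sum"; "none" is no longer accepted.
    loss, _ = chamfer_distance(x, y, point_reduction="mean")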
Reviewed By: gkioxari
Differential Revision: D20816301
fbshipit-source-id: 0f32073210225d157c029d80de450eecdb64f4d2
Summary: Use assertClose in some tests, which also enforces shape equality. Fixes some small problems, including graph_conv on an empty graph.
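For context, a minimal sketch of an assertClose-style check (a hypothetical
helper, not the exact test mixin): unlike a bare torch.allclose assertion, it
fails on shape mismatches instead of letting broadcasting hide them:

    import torch

    def assert_close(actual, expected, rtol=1e-5, atol=1e-8):
        # Shapes must match exactly; broadcasting is not allowed to mask errors.
        assert actual.shape == expected.shape, (
            f"shape mismatch: {tuple(actual.shape)} vs {tuple(expected.shape)}"
        )
        assert torch.allclose(actual, expected, rtol=rtol, atol=atol)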
Reviewed By: nikhilaravi
Differential Revision: D20556912
fbshipit-source-id: 60a61eafe3c03ce0f6c9c1a842685708fb10ac5b
Summary: The shebang line `#!<path to interpreter>` is only required for Python scripts, so remove it from source files that only contain class or function definitions. Additionally, explicitly mark the actual Python scripts in the codebase as executable.
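A small, generic example of the convention (hypothetical file, not from the
codebase):

    #!/usr/bin/env python3
    # Only entry-point scripts that are run directly carry a shebang and the
    # executable bit; modules that merely define classes/functions omit both.

    def main() -> None:
        print("run directly, e.g. ./tool.py")

    if __name__ == "__main__":
        main()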
Reviewed By: nikhilaravi
Differential Revision: D20095778
fbshipit-source-id: d312599fba485e978a243292f88a180d71e1b55a