Summary: PyTorch is adding checks under which integer tensors with requires_grad=True are an error, so they need to be avoided. Fix places that accidentally created them.
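For illustration (a minimal sketch, not the actual call sites fixed in this diff): newer PyTorch rejects integer tensors created with requires_grad=True, and the fix is simply to drop the flag, since only floating point tensors can carry gradients.

```python
import torch

# Newer PyTorch raises a RuntimeError here: only floating point
# tensors can require gradients.
try:
    idx = torch.arange(5, dtype=torch.int64, requires_grad=True)
except RuntimeError as err:
    print(err)

# Fix: index tensors never need gradients; create them plainly.
idx = torch.arange(5, dtype=torch.int64)

# Gradients only make sense for floating point tensors.
weights = torch.zeros(5, requires_grad=True)  # default float dtype
```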
Reviewed By: jcjohnson, gkioxari
Differential Revision: D21576712
fbshipit-source-id: 008218997986800a36d93caa1a032ee91f2bffcd
Summary: This has been failing intermittently.
Reviewed By: nikhilaravi
Differential Revision: D21403157
fbshipit-source-id: 51b74d6c813b52effe72d14b565e250fcabbb463
Summary: Interface and working implementation of ragged KNN. Existing benchmarks (which aren't ragged) haven't slowed. A new benchmark shows that the ragged version is faster than the non-ragged version on inputs of the same shape.
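The public interface that grew out of this work looks roughly like the following sketch (names follow `pytorch3d.ops.knn_points` as later released; the exact signature in this diff may have differed):

```python
import torch
from pytorch3d.ops import knn_points

# Two batches of point clouds with padded shapes (N, P, D).
p1 = torch.randn(2, 128, 3)
p2 = torch.randn(2, 256, 3)

# Ragged input: the true number of points in each cloud, so padded
# entries are skipped rather than matched as neighbors.
lengths1 = torch.tensor([128, 100])
lengths2 = torch.tensor([256, 180])

knn = knn_points(p1, p2, lengths1=lengths1, lengths2=lengths2, K=8)
# knn.dists: (2, 128, 8) squared distances to the K nearest neighbors
# knn.idx:   (2, 128, 8) indices of those neighbors in p2
```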
Reviewed By: jcjohnson
Differential Revision: D20696507
fbshipit-source-id: 21b80f71343a3475c8d3ee0ce2680f92f0fae4de
Summary: Run the linter after recent changes. Fix a long comment in knn.h which clang-format had reflowed badly. Add a crude test that the code doesn't call the deprecated `.type()` or `.data()` methods.
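A minimal sketch of what such a crude check can look like (the root path and file extensions here are assumptions; the actual test in the repo may be structured differently):

```python
import re
from pathlib import Path

# Flag source files that still call the deprecated tensor methods.
DEPRECATED = re.compile(r"\.(type|data)\(\)")

def deprecated_calls(root):
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix in {".cpp", ".cu", ".cuh", ".h"}:
            text = path.read_text(errors="ignore")
            for n, line in enumerate(text.splitlines(), 1):
                if DEPRECATED.search(line):
                    hits.append(f"{path}:{n}: {line.strip()}")
    return hits

hits = deprecated_calls("pytorch3d")
assert not hits, "\n".join(hits)
```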
Reviewed By: nikhilaravi
Differential Revision: D20692935
fbshipit-source-id: 28ce0308adae79a870cb41a810b7cf8744f41ab8
Summary:
Implements K-Nearest Neighbors with C++ and CUDA versions.
KNN in CUDA is highly nontrivial. I've implemented a few different versions of the kernel, and we heuristically dispatch to different kernels based on the problem size. Some of the kernels rely on template specialization on either D or K, so we use template metaprogramming to compile specialized versions for ranges of D and K.
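Conceptually, the dispatch works like this hypothetical Python sketch (the real logic lives in C++/CUDA, and the kernel names and thresholds below are illustrative, not the tuned values):

```python
def select_knn_kernel(D, K):
    """Pick a kernel variant for a KNN problem with dimension D and K neighbors."""
    if D <= 4 and K <= 4:
        # Both D and K fixed at compile time via template specialization:
        # inner loops fully unroll and the running list of K best
        # candidates stays in registers.
        return f"knn_kernel_specialized<D={D}, K={K}>"
    if K <= 32:
        # Only K specialized; D handled by a runtime loop.
        return f"knn_kernel_k_specialized<K={K}>"
    # Generic fallback for arbitrary D and K.
    return "knn_kernel_generic"
```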
These kernels are up to 3x faster than our existing 1-nearest-neighbor kernels, so we should also consider swapping out `nn_points_idx` to use these kernels in the backend.
I've been working mostly on the CUDA kernels, and haven't converged on the correct Python API.
I still want to benchmark against FAISS to see how far away we are from their performance.
Reviewed By: bottler
Differential Revision: D19729286
fbshipit-source-id: 608ffbb7030c21fe4008f330522f4890f0c3c21a