Summary:
Implements K-Nearest Neighbors with C++ and CUDA versions.
Implementing KNN in CUDA is highly nontrivial. I've implemented a few different versions of the kernel, and we heuristically dispatch to different kernels based on the problem size. Some of the kernels rely on template specialization on either the point dimension D or the number of neighbors K, so we use template metaprogramming to compile specialized versions for ranges of D and K.
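As a rough illustration of the dispatch pattern (a minimal sketch, not the actual kernels in this diff): the kernel below is templated on K so the per-thread top-K buffer stays in registers, and a compile-time recursion instantiates one specialization per K in a supported range, selecting the matching one from the runtime K. Names like `KnnKernelK`, `LaunchForK`, and `kMaxK` are hypothetical, and points are assumed to be 3D here for brevity.

```cuda
// Sketch only: one kernel specialization per K, with runtime dispatch.
// Requires C++17 for `if constexpr`.
#include <cuda_runtime.h>
#include <cfloat>
#include <cstdint>

template <int K>
__global__ void KnnKernelK(
    const float* points,    // (P, 3) query points
    const float* database,  // (N, 3) reference points
    int64_t* idx,           // (P, K) output neighbor indices
    int P,
    int N) {
  int p = blockIdx.x * blockDim.x + threadIdx.x;
  if (p >= P) return;

  // Top-K distances/indices live in registers because K is a compile-time
  // constant; a small insertion sort keeps them ordered.
  float best_d[K];
  int64_t best_i[K];
  for (int k = 0; k < K; ++k) {
    best_d[k] = FLT_MAX;
    best_i[k] = -1;
  }

  for (int n = 0; n < N; ++n) {
    float dist = 0.f;
    for (int d = 0; d < 3; ++d) {
      float diff = points[p * 3 + d] - database[n * 3 + d];
      dist += diff * diff;
    }
    // Insert (dist, n) if it beats the current worst of the top K.
    if (dist < best_d[K - 1]) {
      int k = K - 1;
      while (k > 0 && dist < best_d[k - 1]) {
        best_d[k] = best_d[k - 1];
        best_i[k] = best_i[k - 1];
        --k;
      }
      best_d[k] = dist;
      best_i[k] = n;
    }
  }

  for (int k = 0; k < K; ++k) {
    idx[p * K + k] = best_i[k];
  }
}

// Compile-time recursion over the supported range of K: one specialized
// kernel is compiled per value, and the runtime k selects among them.
template <int K>
void LaunchForK(int k, const float* points, const float* database,
                int64_t* idx, int P, int N) {
  if (k == K) {
    int threads = 256;
    int blocks = (P + threads - 1) / threads;
    KnnKernelK<K><<<blocks, threads>>>(points, database, idx, P, N);
  } else if constexpr (K > 1) {
    LaunchForK<K - 1>(k, points, database, idx, P, N);
  }
}

constexpr int kMaxK = 8;  // hypothetical upper bound on specialized K

void KnnLauncher(const float* points, const float* database, int64_t* idx,
                 int P, int N, int k) {
  // Heuristic dispatch: use a specialized kernel when k is in range;
  // a generic fallback kernel (not shown) would handle larger k.
  if (k >= 1 && k <= kMaxK) {
    LaunchForK<kMaxK>(k, points, database, idx, P, N);
  }
}
```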
These kernels are up to 3x faster than our existing 1-nearest-neighbor kernels, so we should also consider swapping out `nn_points_idx` to use these kernels in the backend.
I've been working mostly on the CUDA kernels and haven't yet converged on the final Python API.
I still want to benchmark against FAISS to see how far away we are from their performance.
Reviewed By: bottler
Differential Revision: D19729286
fbshipit-source-id: 608ffbb7030c21fe4008f330522f4890f0c3c21a