Introduction
PyTorch3D provides efficient, reusable components for 3D Computer Vision research with PyTorch.
Key features include:
- Data structure for storing and manipulating triangle meshes
- Efficient operations on triangle meshes (projective transformations, graph convolution, sampling, loss functions)
- A differentiable mesh renderer (a minimal usage sketch follows this list)
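As a taste of the renderer component, here is a minimal sketch of rendering a batch of textured meshes with a rasterizer-plus-shader pipeline. The mesh file path `cow.obj` is a placeholder, and the default camera, light, and rasterization settings are assumptions chosen for brevity; the tutorials and documentation below cover the full range of options.

```python
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, RasterizationSettings,
    MeshRasterizer, MeshRenderer, SoftPhongShader, PointLights,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a textured mesh (placeholder path) into a Meshes object.
meshes = load_objs_as_meshes(["cow.obj"], device=device)

# Compose the differentiable renderer from a rasterizer and a shader.
cameras = FoVPerspectiveCameras(device=device)
raster_settings = RasterizationSettings(image_size=256)
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=SoftPhongShader(device=device, cameras=cameras,
                           lights=PointLights(device=device)),
)

# (N, 256, 256, 4) RGBA images; gradients flow back to mesh vertices and textures.
images = renderer(meshes)
```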
PyTorch3D is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data. For this reason, all operators in PyTorch3D have the following properties, illustrated in the sketch after this list:
- Are implemented using PyTorch tensors
- Can handle minibatches of heterogeneous data
- Can be differentiated
- Can utilize GPUs for acceleration
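A minimal sketch of these properties: a heterogeneous minibatch of two meshes with different numbers of vertices and faces is wrapped in a `Meshes` structure, moved to the GPU when one is available, and passed through differentiable sampling and loss operators. The tiny hand-built meshes and the random target point cloud are placeholders for illustration only.

```python
import torch
from pytorch3d.structures import Meshes
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.loss import chamfer_distance, mesh_laplacian_smoothing

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A heterogeneous minibatch: a tetrahedron and a single triangle.
verts1 = torch.tensor(
    [[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]],
    device=device, requires_grad=True)
faces1 = torch.tensor([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]], device=device)
verts2 = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]],
                      device=device, requires_grad=True)
faces2 = torch.tensor([[0, 1, 2]], device=device)
meshes = Meshes(verts=[verts1, verts2], faces=[faces1, faces2])

# Differentiable operators on the whole batch: point sampling and losses.
points = sample_points_from_meshes(meshes, num_samples=500)  # (2, 500, 3)
target = torch.rand(2, 500, 3, device=device)                # placeholder target
loss, _ = chamfer_distance(points, target)
loss = loss + mesh_laplacian_smoothing(meshes)

# Gradients flow back through the operators to the per-mesh vertex tensors.
loss.backward()
```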
Within FAIR, PyTorch3D has been used to power research projects such as Mesh R-CNN.
Installation
For detailed instructions refer to INSTALL.md.
License
PyTorch3D is released under the BSD-3-Clause License.
Tutorials
Get started with PyTorch3D by trying one of the tutorial notebooks.
- Deform a sphere mesh to dolphin
- Bundle adjustment
- Render textured meshes
- Camera position optimization
- Render textured pointclouds
- Fit a mesh with texture
- Render DensePose data
- Load & Render ShapeNet data
Documentation
Learn more about the API by reading the PyTorch3D documentation.
We also have deep dive notes on several API components.
Overview Video
We have created a short (~14 min) video tutorial providing an overview of the PyTorch3D codebase, including several code examples. The video is available on YouTube.
Development
We welcome new contributions to PyTorch3D and we will be actively maintaining this library! Please refer to CONTRIBUTING.md for full instructions on how to run the code, tests and linter, and submit your pull requests.
Contributors
PyTorch3D is written and maintained by the Facebook AI Research Computer Vision Team.
In alphabetical order:
- Amitav Baruah
- Steve Branson
- Luya Gao
- Georgia Gkioxari
- Taylor Gordon
- Justin Johnson
- Patrick Labatut
- Christoph Lassner
- Wan-Yen Lo
- David Novotny
- Nikhila Ravi
- Jeremy Reizenstein
- Dave Schnizlein
- Roman Shapovalov
- Olivia Wiles
Citation
If you find PyTorch3D useful in your research, please cite our tech report:
@article{ravi2020pytorch3d,
author = {Nikhila Ravi and Jeremy Reizenstein and David Novotny and Taylor Gordon
and Wan-Yen Lo and Justin Johnson and Georgia Gkioxari},
title = {Accelerating 3D Deep Learning with PyTorch3D},
journal = {arXiv:2007.08501},
year = {2020},
}
If you are using the pulsar backend for sphere-rendering (the `PulsarPointsRenderer` or `pytorch3d.renderer.points.pulsar.Renderer`), please cite the tech report:
@article{lassner2020pulsar,
author = {Christoph Lassner},
title = {Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations},
journal = {arXiv:2004.07484},
year = {2020},
}
News
Please see below for a timeline of the codebase updates in reverse chronological order. We are sharing updates on the releases as well as research projects which are built with PyTorch3D. The changelogs for the releases are available under Releases, and the builds can be installed using `conda` as per the instructions in INSTALL.md.
[November 2nd 2020]: PyTorch3D v0.3 released, integrating the pulsar backend.
[Aug 28th 2020]: PyTorch3D v0.2.5 released
[July 17th 2020]: PyTorch3D tech report published on ArXiv: https://arxiv.org/abs/2007.08501
[April 24th 2020]: PyTorch3D v0.2 released
[March 25th 2020]: SynSin codebase released using PyTorch3D: https://github.com/facebookresearch/synsin
[March 8th 2020]: PyTorch3D v0.1.1 bug fix release
[Jan 23rd 2020]: PyTorch3D v0.1 released. Mesh R-CNN codebase released: https://github.com/facebookresearch/meshrcnn