dhb 092400f1e7 allow saving vertex normal in save_obj (#1511)
Summary:
Although we can load per-vertex normals in `load_obj`, saving per-vertex normals is not supported in `save_obj`.

This patch adds support for passing per-vertex normal data to `save_obj`:
``` python
def save_obj(
    f: PathOrStr,
    verts,
    faces,
    decimal_places: Optional[int] = None,
    path_manager: Optional[PathManager] = None,
    *,
    verts_normals: Optional[torch.Tensor] = None,
    faces_normals: Optional[torch.Tensor] = None,
    verts_uvs: Optional[torch.Tensor] = None,
    faces_uvs: Optional[torch.Tensor] = None,
    texture_map: Optional[torch.Tensor] = None,
) -> None:
    """
    Save a mesh to an .obj file.

    Args:
        f: File (str or path) to which the mesh should be written.
        verts: FloatTensor of shape (V, 3) giving vertex coordinates.
        faces: LongTensor of shape (F, 3) giving faces.
        decimal_places: Number of decimal places for saving.
        path_manager: Optional PathManager for interpreting f if
            it is a str.
        verts_normals: FloatTensor of shape (V, 3) giving the normal per vertex.
        faces_normals: LongTensor of shape (F, 3) giving the index into verts_normals
            for each vertex in the face.
        verts_uvs: FloatTensor of shape (V, 2) giving the uv coordinate per vertex.
        faces_uvs: LongTensor of shape (F, 3) giving the index into verts_uvs for
            each vertex in the face.
        texture_map: FloatTensor of shape (H, W, 3) representing the texture map
            for the mesh which will be saved as an image. The values are expected
            to be in the range [0, 1].
    """
```
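
For reference, a minimal usage sketch of the new keyword arguments (the ico-sphere test data and the output path are illustrative, not part of the patch):
``` python
from pytorch3d.io import save_obj
from pytorch3d.utils import ico_sphere

# Illustrative data: an ico-sphere mesh and its per-vertex normals.
mesh = ico_sphere(level=2)
verts = mesh.verts_packed()                   # (V, 3) vertex coordinates
faces = mesh.faces_packed()                   # (F, 3) vertex indices per face
verts_normals = mesh.verts_normals_packed()   # (V, 3) one normal per vertex

# With exactly one normal per vertex, the normal indices per face
# are the same as the vertex indices per face.
save_obj(
    "sphere.obj",
    verts,
    faces,
    verts_normals=verts_normals,
    faces_normals=faces,
)
```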

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1511

Reviewed By: shapovalov

Differential Revision: D45086045

Pulled By: bottler

fbshipit-source-id: 666efb0d2c302df6cf9f2f6601d83a07856bf32f

Introduction

PyTorch3D provides efficient, reusable components for 3D Computer Vision research with PyTorch.

Key features include:

  • Data structure for storing and manipulating triangle meshes
  • Efficient operations on triangle meshes (projective transformations, graph convolution, sampling, loss functions)
  • A differentiable mesh renderer
  • Implicitron, a framework for new-view synthesis via implicit representations (see its README and the accompanying blog post)

PyTorch3D is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data. For this reason, all operators in PyTorch3D:

  • Are implemented using PyTorch tensors
  • Can handle minibatches of heterogeneous data
  • Can be differentiated
  • Can utilize GPUs for acceleration (see the sketch below)

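A minimal sketch of these properties in combination (the random shapes, the sampling step, and the chamfer loss are only illustrative):

``` python
import torch
from pytorch3d.structures import Meshes
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.loss import chamfer_distance

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A heterogeneous minibatch: two meshes with different vertex/face counts.
verts_a = torch.rand(30, 3, device=device, requires_grad=True)
faces_a = torch.randint(30, (52, 3), device=device)
verts_b = torch.rand(55, 3, device=device, requires_grad=True)
faces_b = torch.randint(55, (96, 3), device=device)
meshes = Meshes(verts=[verts_a, verts_b], faces=[faces_a, faces_b])

# A differentiable op on the whole batch: sample point clouds from the mesh
# surfaces and compare them to target point clouds with a chamfer loss.
points = sample_points_from_meshes(meshes, num_samples=500)   # (2, 500, 3)
targets = torch.rand(2, 500, 3, device=device)
loss, _ = chamfer_distance(points, targets)
loss.backward()   # gradients flow back to verts_a and verts_b
```
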
Within FAIR, PyTorch3D has been used to power research projects such as Mesh R-CNN.

See our blog post for more demos and to learn more about PyTorch3D.

Installation

For detailed instructions refer to INSTALL.md.

License

PyTorch3D is released under the BSD License.

Tutorials

Get started with PyTorch3D by trying one of the tutorial notebooks.

  • Deform a sphere mesh to dolphin
  • Bundle adjustment
  • Render textured meshes
  • Camera position optimization
  • Render textured pointclouds
  • Fit a mesh with texture
  • Render DensePose data
  • Load & Render ShapeNet data
  • Fit Textured Volume
  • Fit A Simple Neural Radiance Field
  • Fit Textured Volume in Implicitron
  • Implicitron Config System
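
To give a flavor of what tutorials like "Render textured meshes" cover, here is a minimal rendering sketch (an ico-sphere with a uniform vertex color; the camera and light placement are arbitrary choices, not values taken from the tutorials):

``` python
import torch
from pytorch3d.utils import ico_sphere
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, RasterizationSettings, MeshRenderer, MeshRasterizer,
    SoftPhongShader, PointLights, TexturesVertex, look_at_view_transform,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Build a simple textured mesh: an ico-sphere with a uniform grey vertex color.
sphere = ico_sphere(level=4, device=device)
verts = sphere.verts_packed()
verts_rgb = 0.7 * torch.ones_like(verts)[None]            # (1, V, 3)
mesh = Meshes(
    verts=[verts],
    faces=[sphere.faces_packed()],
    textures=TexturesVertex(verts_features=verts_rgb),
)

# Place a camera and a point light, then rasterize and shade.
R, T = look_at_view_transform(dist=2.7, elev=10.0, azim=20.0)
cameras = FoVPerspectiveCameras(device=device, R=R, T=T)
raster_settings = RasterizationSettings(image_size=256)
lights = PointLights(device=device, location=[[0.0, 0.0, -3.0]])
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=SoftPhongShader(device=device, cameras=cameras, lights=lights),
)
images = renderer(mesh)   # (1, 256, 256, 4) RGBA render of the sphere
```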

Documentation

Learn more about the API by reading the PyTorch3D documentation.

We also have deep dive notes on several API components.

Overview Video

We have created a short (~14 min) video tutorial providing an overview of the PyTorch3D codebase, including several code examples; it is available on YouTube.

Development

We welcome new contributions to PyTorch3D and we will be actively maintaining this library! Please refer to CONTRIBUTING.md for full instructions on how to run the code, tests and linter, and submit your pull requests.

Development and Compatibility

  • main branch: actively developed, without any guarantee; anything can be broken at any time
    • REMARK: this includes nightly builds which are built from main
    • HINT: the commit history can help locate regressions or changes
  • backward-compatibility between releases: no guarantee. Best efforts to communicate breaking changes and facilitate migration of code or data (incl. models).

Contributors

PyTorch3D is written and maintained by the Facebook AI Research Computer Vision Team.

In alphabetical order:

  • Amitav Baruah
  • Steve Branson
  • Krzysztof Chalupka
  • Jiali Duan
  • Luya Gao
  • Georgia Gkioxari
  • Taylor Gordon
  • Justin Johnson
  • Patrick Labatut
  • Christoph Lassner
  • Wan-Yen Lo
  • David Novotny
  • Nikhila Ravi
  • Jeremy Reizenstein
  • Dave Schnizlein
  • Roman Shapovalov
  • Olivia Wiles

Citation

If you find PyTorch3D useful in your research, please cite our tech report:

@article{ravi2020pytorch3d,
    author = {Nikhila Ravi and Jeremy Reizenstein and David Novotny and Taylor Gordon
                  and Wan-Yen Lo and Justin Johnson and Georgia Gkioxari},
    title = {Accelerating 3D Deep Learning with PyTorch3D},
    journal = {arXiv:2007.08501},
    year = {2020},
}

If you are using the pulsar backend for sphere-rendering (the PulsarPointRenderer or pytorch3d.renderer.points.pulsar.Renderer), please cite the tech report:

@article{lassner2020pulsar,
    author = {Christoph Lassner and Michael Zollh\"ofer},
    title = {Pulsar: Efficient Sphere-based Neural Rendering},
    journal = {arXiv:2004.07484},
    year = {2020},
}

News

Please see below for a timeline of the codebase updates in reverse chronological order. We are sharing updates on the releases as well as research projects which are built with PyTorch3D. The changelogs for the releases are available under Releases, and the builds can be installed using conda as per the instructions in INSTALL.md.

[Dec 19th 2022]: PyTorch3D v0.7.2 released.

[Oct 23rd 2022]: PyTorch3D v0.7.1 released.

[Aug 10th 2022]: PyTorch3D v0.7.0 released with Implicitron and MeshRasterizerOpenGL.

[Apr 28th 2022]: PyTorch3D v0.6.2 released

[Dec 16th 2021]: PyTorch3D v0.6.1 released

[Oct 6th 2021]: PyTorch3D v0.6.0 released

[Aug 5th 2021]: PyTorch3D v0.5.0 released

[Feb 9th 2021]: PyTorch3D v0.4.0 released with support for implicit functions, volume rendering and a reimplementation of NeRF.

[November 2nd 2020]: PyTorch3D v0.3.0 released, integrating the pulsar backend.

[Aug 28th 2020]: PyTorch3D v0.2.5 released

[July 17th 2020]: PyTorch3D tech report published on ArXiv: https://arxiv.org/abs/2007.08501

[April 24th 2020]: PyTorch3D v0.2.0 released

[March 25th 2020]: SynSin codebase released using PyTorch3D: https://github.com/facebookresearch/synsin

[March 8th 2020]: PyTorch3D v0.1.1 bug fix release

[Jan 23rd 2020]: PyTorch3D v0.1.0 released. Mesh R-CNN codebase released: https://github.com/facebookresearch/meshrcnn
