diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index 35587246..784dcfde 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -8,7 +8,7 @@ We actively welcome your pull requests.
 However, if you're adding any significant features, please make sure to have a corresponding issue to outline your proposal and motivation and allow time for us to give feedback, *before* you send a PR.
 We do not always accept new features, and we take the following factors into consideration:
 
-- Whether the same feature can be achieved without modifying PyTorch3d directly. If any aspect of the API is not extensible, please highlight this in an issue so we can work on making this more extensible.
+- Whether the same feature can be achieved without modifying PyTorch3D directly. If any aspect of the API is not extensible, please highlight this in an issue so we can work on making this more extensible.
 - Whether the feature is potentially useful to a large audience, or only to a small portion of users.
 - Whether the proposed solution has a good design and interface.
 - Whether the proposed solution adds extra mental/practical overhead to users who don't need such feature.
diff --git a/.github/ISSUE_TEMPLATE/bugs.md b/.github/ISSUE_TEMPLATE/bugs.md
index d10cb24b..0f2d20f7 100644
--- a/.github/ISSUE_TEMPLATE/bugs.md
+++ b/.github/ISSUE_TEMPLATE/bugs.md
@@ -1,6 +1,6 @@
 ---
 name: "🐛 Bugs / Unexpected behaviors"
-about: Please report unexpected behaviors or bugs in PyTorch3d.
+about: Please report unexpected behaviors or bugs in PyTorch3D.
 ---
 
diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md
index 6b6ced73..7a478a0c 100644
--- a/.github/ISSUE_TEMPLATE/feature_request.md
+++ b/.github/ISSUE_TEMPLATE/feature_request.md
@@ -1,6 +1,6 @@
 ---
 name: "\U0001F680 Feature Request"
-about: Submit a proposal/request for a new PyTorch3d feature
+about: Submit a proposal/request for a new PyTorch3D feature
 ---
 
diff --git a/.github/ISSUE_TEMPLATE/questions-help.md b/.github/ISSUE_TEMPLATE/questions-help.md
index 01adeba1..6b9d5801 100644
--- a/.github/ISSUE_TEMPLATE/questions-help.md
+++ b/.github/ISSUE_TEMPLATE/questions-help.md
@@ -1,18 +1,18 @@
 ---
 name: "❓ Questions"
-about: How do I do X with PyTorch3d? How does PyTorch3d do X?
+about: How do I do X with PyTorch3D? How does PyTorch3D do X?
 ---
 
-## ❓ Questions on how to use PyTorch3d
+## ❓ Questions on how to use PyTorch3D
 
 NOTE:
 
-1. If you encountered any errors or unexpected issues while using PyTorch3d and need help resolving them,
+1. If you encountered any errors or unexpected issues while using PyTorch3D and need help resolving them,
    please use the "Bugs / Unexpected behaviors" issue template.
 
 2. We do not answer general machine learning / computer vision questions that are not specific to
-   PyTorch3d, such as how a model works or what algorithm/methods can be
+   PyTorch3D, such as how a model works or what algorithm/methods can be
    used to achieve X.
diff --git a/INSTALL.md b/INSTALL.md
index 37f49e72..389dbed4 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -5,7 +5,7 @@
 
 ### Core library
 
-The core library is written in PyTorch. Several components have underlying implementation in CUDA for improved performance. A subset of these components have CPU implementations in C++/Pytorch. It is advised to use PyTorch3d with GPU support in order to use all the features.
+The core library is written in PyTorch. Several components have underlying implementation in CUDA for improved performance. A subset of these components have CPU implementations in C++/PyTorch. It is advised to use PyTorch3D with GPU support in order to use all the features.
 
 - Linux or macOS or Windows
 - Python ≥ 3.6
@@ -25,7 +25,7 @@ conda install -c conda-forge -c fvcore fvcore
 
 ### Tests/Linting and Demos
 
-For developing on top of PyTorch3d or contributing, you will need to run the linter and tests. If you want to run any of the notebook tutorials as `docs/tutorials` you will also need matplotlib.
+For developing on top of PyTorch3D or contributing, you will need to run the linter and tests. If you want to run any of the notebook tutorials in `docs/tutorials` you will also need matplotlib.
 - scikit-image
 - black
 - isort
diff --git a/LICENSE b/LICENSE
index ba19f53d..dc73e2d2 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,6 +1,6 @@
 BSD License
 
-For PyTorch3d software
+For PyTorch3D software
 
 Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
 
diff --git a/README.md b/README.md
index ab0e1bdd..ed4a10ad 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@
 
 # Introduction
 
-PyTorch3d provides efficient, reusable components for 3D Computer Vision research with [PyTorch](https://pytorch.org).
+PyTorch3D provides efficient, reusable components for 3D Computer Vision research with [PyTorch](https://pytorch.org).
 
 Key features include:
 
@@ -13,15 +13,15 @@ Key features include:
 - Efficient operations on triangle meshes (projective transformations, graph convolution, sampling, loss functions)
 - A differentiable mesh renderer
 
-PyTorch3d is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data.
-For this reason, all operators in PyTorch3d:
+PyTorch3D is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data.
+For this reason, all operators in PyTorch3D:
 
 - Are implemented using PyTorch tensors
 - Can handle minibatches of hetereogenous data
 - Can be differentiated
 - Can utilize GPUs for acceleration
 
-Within FAIR, PyTorch3d has been used to power research projects such as [Mesh R-CNN](https://arxiv.org/abs/1906.02739).
+Within FAIR, PyTorch3D has been used to power research projects such as [Mesh R-CNN](https://arxiv.org/abs/1906.02739).
 
 ## Installation
 
@@ -29,11 +29,11 @@ For detailed instructions refer to [INSTALL.md](INSTALL.md).
 
 ## License
 
-PyTorch3d is released under the [BSD-3-Clause License](LICENSE).
+PyTorch3D is released under the [BSD-3-Clause License](LICENSE).
 
 ## Tutorials
 
-Get started with PyTorch3d by trying one of the tutorial notebooks.
+Get started with PyTorch3D by trying one of the tutorial notebooks.
 
 |||
 |:-----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------:|
@@ -45,7 +45,7 @@ Get started with PyTorch3d by trying one of the tutorial notebooks.
 
 ## Documentation
 
-Learn more about the API by reading the PyTorch3d [documentation](https://pytorch3d.readthedocs.org/).
+Learn more about the API by reading the PyTorch3D [documentation](https://pytorch3d.readthedocs.org/).
 
 We also have deep dive notes on several API components:
 
@@ -60,11 +60,11 @@ We welcome new contributions to Pytorch3d and we will be actively maintaining th
 
 ## Contributors
 
-PyTorch3d is written and maintained by the Facebook AI Research Computer Vision Team.
+PyTorch3D is written and maintained by the Facebook AI Research Computer Vision Team.
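For context on the operator properties listed in the README hunk above (PyTorch tensors, minibatches, differentiability), here is a minimal sketch using `chamfer_distance` from `pytorch3d.loss`. It assumes a working PyTorch3D install; the tensor sizes are arbitrary illustrative values:

```python
import torch
from pytorch3d.loss import chamfer_distance

# A minibatch of 4 source and 4 target point clouds (different point counts).
points_src = torch.rand(4, 100, 3, requires_grad=True)
points_tgt = torch.rand(4, 80, 3)

# The loss is an ordinary differentiable PyTorch op, so gradients flow back.
loss, _ = chamfer_distance(points_src, points_tgt)
loss.backward()
print(points_src.grad.shape)  # torch.Size([4, 100, 3])
```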
 ## Citation
 
-If you find PyTorch3d useful in your research, please cite:
+If you find PyTorch3D useful in your research, please cite:
 
 ```bibtex
 @misc{ravi2020pytorch3d,
diff --git a/docs/notes/renderer.md b/docs/notes/renderer.md
index 9b467717..47d3bfe7 100644
--- a/docs/notes/renderer.md
+++ b/docs/notes/renderer.md
@@ -19,7 +19,7 @@ In order to experiment with different approaches, we wanted a modular implementa
 
 Taking inspiration from existing work [[1](#1), [2](#2)], we have created a new, modular, differentiable renderer with **parallel implementations in PyTorch, C++ and CUDA**, as well as comprehensive documentation and tests, with the aim of helping to further research in this field.
 
-Our implementation decouples the rasterization and shading steps of rendering. The core rasterization step (based on [[2]](#2)) returns several intermediate variables and has an optimized implementation in CUDA. The rest of the pipeline is implemented purely in PyTorch, and is designed to be customized and extended. With this approach, the PyTorch3d differentiable renderer can be imported as a library.
+Our implementation decouples the rasterization and shading steps of rendering. The core rasterization step (based on [[2]](#2)) returns several intermediate variables and has an optimized implementation in CUDA. The rest of the pipeline is implemented purely in PyTorch, and is designed to be customized and extended. With this approach, the PyTorch3D differentiable renderer can be imported as a library.
 
 ## Get started
 
@@ -36,9 +36,9 @@ First, the image is divided into a coarse grid and mesh faces are allocated to t
 
 We additionally introduce a parameter `faces_per_pixel` which allows users to specify the top K faces which should be returned per pixel in the image (as opposed to traditional rasterization which returns only the index of the closest face in the mesh per pixel). The top K face properties can then be aggregated using different methods (such as the sigmoid/softmax approach proposed by Li et at in SoftRasterizer [[2]](#2)).
 
-We compared PyTorch3d with SoftRasterizer to measure the effect of both these design changes on the speed of rasterization. We selected a set of meshes of different sizes from ShapeNetV1 core, and rasterized one mesh in each batch to produce images of different sizes. We report the speed of the forward and backward passes.
+We compared PyTorch3D with SoftRasterizer to measure the effect of both these design changes on the speed of rasterization. We selected a set of meshes of different sizes from ShapeNetV1 core, and rasterized one mesh in each batch to produce images of different sizes. We report the speed of the forward and backward passes.
 
-**Fig 1: PyTorch3d Naive vs Coarse-to-fine**
+**Fig 1: PyTorch3D Naive vs Coarse-to-fine**
 
 This figure shows how the coarse-to-fine strategy for rasterization results in significant speed up compared to naive rasterization for large image size and large mesh sizes.
 
@@ -49,9 +49,9 @@ For small mesh and image sizes, the naive approach is slightly faster. We advise
 
 Setting `bin_size = 0` will enable naive rasterization. If `bin_size > 0`, the coarse-to-fine approach is used. The default is `bin_size = None` in which case we set the bin size based on [heuristics](https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/renderer/mesh/rasterize_meshes.py#L92).
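For readers following along, the `bin_size` and `faces_per_pixel` knobs discussed in the hunk above surface in the rasterization settings object. A minimal sketch, assuming the `RasterizationSettings` name exported from `pytorch3d.renderer` (check your installed version):

```python
from pytorch3d.renderer import RasterizationSettings

raster_settings = RasterizationSettings(
    image_size=512,
    blur_radius=0.0,
    faces_per_pixel=8,  # return the top K = 8 faces per pixel
    bin_size=None,      # None: heuristic; 0: naive; > 0: coarse-to-fine with this bin size
)
```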
 
-**Fig 2: PyTorch3d Coarse-to-fine vs SoftRasterizer**
+**Fig 2: PyTorch3D Coarse-to-fine vs SoftRasterizer**
 
-This figure shows the effect of the _combination_ of coarse-to-fine rasterization and caching the faces rasterized per pixel returned from the forward pass. For large meshes and image sizes, we again observe that the PyTorch3d rasterizer is significantly faster, noting that the speed is dominated by the forward pass and the backward pass is very fast.
+This figure shows the effect of the _combination_ of coarse-to-fine rasterization and caching the faces rasterized per pixel returned from the forward pass. For large meshes and image sizes, we again observe that the PyTorch3D rasterizer is significantly faster, noting that the speed is dominated by the forward pass and the backward pass is very fast.
 
 In the SoftRasterizer implementation, in both the forward and backward pass, there is a loop over every single face in the mesh for every pixel in the image. Therefore, the time for the full forward plus backward pass is ~2x the time for the forward pass. For small mesh and image sizes, the SoftRasterizer approach is slightly faster.
 
@@ -61,19 +61,19 @@ In the SoftRasterizer implementation, in both the forward and backward pass, the
 
 ### 2. Support for Heterogeneous Batches
 
-PyTorch3d supports efficient rendering of batches of meshes where each mesh has different numbers of vertices and faces. This is done without using padded inputs.
+PyTorch3D supports efficient rendering of batches of meshes where each mesh has different numbers of vertices and faces. This is done without using padded inputs.
 
-We again compare with SoftRasterizer which only supports batches of homogeneous meshes and test two cases: 1) a for loop over meshes in the batch, 2) padded inputs, and compare with the native heterogeneous batching support in PyTorch3d.
+We again compare with SoftRasterizer which only supports batches of homogeneous meshes and test two cases: 1) a for loop over meshes in the batch, 2) padded inputs, and compare with the native heterogeneous batching support in PyTorch3D.
 
 We group meshes from ShapeNet into bins based on the number of faces in the mesh, and sample to compose a batch. We then render images of fixed size and measure the speed of the forward and backward passes.
 
 We tested with a range of increasingly large meshes and bin sizes.
 
-**Fig 3: PyTorch3d heterogeneous batching compared with SoftRasterizer**
+**Fig 3: PyTorch3D heterogeneous batching compared with SoftRasterizer**
 
-This shows that for large meshes and large bin width (i.e. more variation in mesh size in the batch) the heterogeneous batching approach in PyTorch3d is faster than either of the workarounds with SoftRasterizer.
+This shows that for large meshes and large bin width (i.e. more variation in mesh size in the batch) the heterogeneous batching approach in PyTorch3D is faster than either of the workarounds with SoftRasterizer.
 
 (settings: batch size = 16, mesh sizes in bins ranging from 500-350k faces, image size = 64, faces per pixel = 100)
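The heterogeneous batching benchmarked above is exposed through the `Meshes` structure, which accepts per-mesh tensors of different sizes without manual padding. A small sketch (the shapes here are made up for illustration):

```python
import torch
from pytorch3d.structures import Meshes

verts1 = torch.rand(32, 3)                # mesh 1: 32 vertices
faces1 = torch.randint(0, 32, (60, 3))    # mesh 1: 60 faces
verts2 = torch.rand(100, 3)               # mesh 2: 100 vertices
faces2 = torch.randint(0, 100, (180, 3))  # mesh 2: 180 faces

# One batch, two differently sized meshes, no manual padding.
meshes = Meshes(verts=[verts1, verts2], faces=[faces1, faces2])
print(meshes.verts_packed().shape)  # torch.Size([132, 3]): concatenated view
print(meshes.verts_padded().shape)  # torch.Size([2, 100, 3]): padded view
```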
@@ -81,14 +81,14 @@ This shows that for large meshes and large bin width (i.e. more variation in mes
 
 **NOTE: CUDA Memory usage**
 
-The SoftRasterizer forward CUDA kernel only outputs one `(N, H, W, 4)` FloatTensor compared with the PyTorch3d rasterizer forward CUDA kernel which outputs 4 tensors:
+The SoftRasterizer forward CUDA kernel only outputs one `(N, H, W, 4)` FloatTensor compared with the PyTorch3D rasterizer forward CUDA kernel which outputs 4 tensors:
 
 - `pix_to_face`, LongTensor `(N, H, W, K)`
 - `zbuf`, FloatTensor `(N, H, W, K)`
 - `dist`, FloatTensor `(N, H, W, K)`
 - `bary_coords`, FloatTensor `(N, H, W, K, 3)`
 
-where **N** = batch size, **H/W** are image height/width, **K** is the faces per pixel. The PyTorch3d backward pass returns gradients for `zbuf`, `dist` and `bary_coords`.
+where **N** = batch size, **H/W** are image height/width, **K** is the faces per pixel. The PyTorch3D backward pass returns gradients for `zbuf`, `dist` and `bary_coords`.
 
 Returning intermediate variables from rasterization has an associated memory cost. We can calculate the theoretical lower bound on the memory usage for the forward and backward pass as follows:
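Plugging representative sizes into the tensor shapes listed in the hunk above gives a feel for that bound. A back-of-envelope sketch; the N, H, W, K values are illustrative, not taken from the benchmark:

```python
# Bytes per element: int64 = 8 (pix_to_face), float32 = 4 (the rest).
N, H, W, K = 8, 256, 256, 100
P = N * H * W * K  # pixels * faces-per-pixel across the batch

forward = P * (8 + 4 + 4 + 3 * 4)  # pix_to_face + zbuf + dist + bary_coords
backward = P * (4 + 4 + 3 * 4)     # grads for zbuf, dist, bary_coords only

print(f"forward:  {forward / 2**30:.2f} GiB")   # ~1.37 GiB
print(f"backward: {backward / 2**30:.2f} GiB")  # ~0.98 GiB
```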
diff --git a/docs/notes/renderer_getting_started.md b/docs/notes/renderer_getting_started.md
index acf24c1b..e5260583 100644
--- a/docs/notes/renderer_getting_started.md
+++ b/docs/notes/renderer_getting_started.md
@@ -34,7 +34,7 @@ The differentiable renderer API is experimental and subject to change!.
 
 ### Coordinate transformation conventions
 
-Rendering requires transformations between several different coordinate frames: world space, view/camera space, NDC space and screen space. At each step it is important to know where the camera is located, how the +X, +Y, +Z axes are aligned and the possible range of values. The following figure outlines the conventions used PyTorch3d.
+Rendering requires transformations between several different coordinate frames: world space, view/camera space, NDC space and screen space. At each step it is important to know where the camera is located, how the +X, +Y, +Z axes are aligned and the possible range of values. The following figure outlines the conventions used in PyTorch3D.
 
@@ -45,18 +45,18 @@ For example, given a teapot mesh, the world coordinate frame, camera coordiante
 
 ---
 
-**NOTE: PyTorch3d vs OpenGL**
+**NOTE: PyTorch3D vs OpenGL**
 
 While we tried to emulate several aspects of OpenGL, there are differences in the coordinate frame conventions.
 
 - The default world coordinate frame in PyTorch3D has +Z pointing in to the screen whereas in OpenGL, +Z is pointing out of the screen. Both are right handed.
-- The NDC coordinate system in PyTorch3d is **right-handed** compared with a **left-handed** NDC coordinate system in OpenGL (the projection matrix switches the handedness).
+- The NDC coordinate system in PyTorch3D is **right-handed** compared with a **left-handed** NDC coordinate system in OpenGL (the projection matrix switches the handedness).
 
 ---
 
 ### A simple renderer
 
-A renderer in PyTorch3d is composed of a **rasterizer** and a **shader**. Create a renderer in a few simple steps:
+A renderer in PyTorch3D is composed of a **rasterizer** and a **shader**. Create a renderer in a few simple steps:
 
 ```
 # Imports
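The code block in the hunk above is cut off after its first comment; for orientation, the rasterizer + shader composition it introduces looks roughly like the sketch below. Class names follow PyTorch3D around the time of this change (e.g. `OpenGLPerspectiveCameras` was later renamed `FoVPerspectiveCameras`, and shader names have shifted between releases), so adjust to your installed version:

```python
import torch
from pytorch3d.renderer import (
    OpenGLPerspectiveCameras,
    RasterizationSettings,
    MeshRenderer,
    MeshRasterizer,
    HardPhongShader,
)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
cameras = OpenGLPerspectiveCameras(device=device)
raster_settings = RasterizationSettings(image_size=256, blur_radius=0.0, faces_per_pixel=1)

# The renderer is just the two pluggable pieces described above.
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=HardPhongShader(device=device, cameras=cameras),
)
# images = renderer(meshes)  # `meshes` would be a pytorch3d.structures.Meshes batch
```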
\n", + "A **renderer** in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthgraphic/perspective). Here we initialize some of these components and use default values for the rest. \n", "\n", "For optimizing the camera position we will use a renderer which produces a **silhouette** of the object only and does not apply any **lighting** or **shading**. We will also initialize another renderer which applies full **phong shading** and use this for visualizing the outputs. " ] @@ -817,7 +817,7 @@ "source": [ "## 5. Conclusion \n", "\n", - "In this tutorial we learnt how to **load** a mesh from an obj file, initialize a PyTorch3d datastructure called **Meshes**, set up an **Renderer** consisting of a **Rasterizer** and a **Shader**, set up an optimization loop including a **Model** and a **loss function**, and run the optimization. " + "In this tutorial we learnt how to **load** a mesh from an obj file, initialize a PyTorch3D datastructure called **Meshes**, set up an **Renderer** consisting of a **Rasterizer** and a **Shader**, set up an optimization loop including a **Model** and a **loss function**, and run the optimization. " ] } ], diff --git a/docs/tutorials/deform_source_mesh_to_target_mesh.ipynb b/docs/tutorials/deform_source_mesh_to_target_mesh.ipynb index 093e5c7e..098f6f76 100644 --- a/docs/tutorials/deform_source_mesh_to_target_mesh.ipynb +++ b/docs/tutorials/deform_source_mesh_to_target_mesh.ipynb @@ -37,8 +37,8 @@ "We will cover: \n", "\n", "- How to **load a mesh** from an `.obj` file\n", - "- How to use the PyTorch3d **Meshes** datastructure\n", - "- How to use 4 different PyTorch3d **mesh loss functions**\n", + "- How to use the PyTorch3D **Meshes** datastructure\n", + "- How to use 4 different PyTorch3D **mesh loss functions**\n", "- How to set up an **optimization loop**\n", "\n", "\n", @@ -654,7 +654,7 @@ "source": [ "## 6. Conclusion \n", "\n", - "In this tutorial we learnt how to load a mesh from an obj file, initialize a PyTorch3d datastructure called **Meshes**, set up an optimization loop and use four different PyTorch3d mesh loss functions. " + "In this tutorial we learnt how to load a mesh from an obj file, initialize a PyTorch3D datastructure called **Meshes**, set up an optimization loop and use four different PyTorch3D mesh loss functions. " ] } ], diff --git a/docs/tutorials/render_textured_meshes.ipynb b/docs/tutorials/render_textured_meshes.ipynb index 6f05940d..8d0e7eab 100644 --- a/docs/tutorials/render_textured_meshes.ipynb +++ b/docs/tutorials/render_textured_meshes.ipynb @@ -173,7 +173,7 @@ "\n", "Load an `.obj` file and it's associated `.mtl` file and create a **Textures** and **Meshes** object. \n", "\n", - "**Meshes** is a unique datastructure provided in PyTorch3d for working with batches of meshes of different sizes. \n", + "**Meshes** is a unique datastructure provided in PyTorch3D for working with batches of meshes of different sizes. \n", "\n", "**Textures** is an auxillary datastructure for storing texture information about meshes. \n", "\n", @@ -287,7 +287,7 @@ "source": [ "## 2. Create a renderer\n", "\n", - "A renderer in PyTorch3d is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). 
diff --git a/docs/tutorials/deform_source_mesh_to_target_mesh.ipynb b/docs/tutorials/deform_source_mesh_to_target_mesh.ipynb
index 093e5c7e..098f6f76 100644
--- a/docs/tutorials/deform_source_mesh_to_target_mesh.ipynb
+++ b/docs/tutorials/deform_source_mesh_to_target_mesh.ipynb
@@ -37,8 +37,8 @@
    "We will cover: \n",
    "\n",
    "- How to **load a mesh** from an `.obj` file\n",
-    "- How to use the PyTorch3d **Meshes** datastructure\n",
-    "- How to use 4 different PyTorch3d **mesh loss functions**\n",
+    "- How to use the PyTorch3D **Meshes** datastructure\n",
+    "- How to use 4 different PyTorch3D **mesh loss functions**\n",
    "- How to set up an **optimization loop**\n",
    "\n",
    "\n",
@@ -654,7 +654,7 @@
    "source": [
     "## 6. Conclusion \n",
     "\n",
-    "In this tutorial we learnt how to load a mesh from an obj file, initialize a PyTorch3d datastructure called **Meshes**, set up an optimization loop and use four different PyTorch3d mesh loss functions. "
+    "In this tutorial we learnt how to load a mesh from an obj file, initialize a PyTorch3D datastructure called **Meshes**, set up an optimization loop and use four different PyTorch3D mesh loss functions. "
    ]
   }
 ],
diff --git a/docs/tutorials/render_textured_meshes.ipynb b/docs/tutorials/render_textured_meshes.ipynb
index 6f05940d..8d0e7eab 100644
--- a/docs/tutorials/render_textured_meshes.ipynb
+++ b/docs/tutorials/render_textured_meshes.ipynb
@@ -173,7 +173,7 @@
    "\n",
    "Load an `.obj` file and it's associated `.mtl` file and create a **Textures** and **Meshes** object. \n",
    "\n",
-    "**Meshes** is a unique datastructure provided in PyTorch3d for working with batches of meshes of different sizes. \n",
+    "**Meshes** is a unique datastructure provided in PyTorch3D for working with batches of meshes of different sizes. \n",
    "\n",
    "**Textures** is an auxillary datastructure for storing texture information about meshes. \n",
    "\n",
@@ -287,7 +287,7 @@
    "source": [
     "## 2. Create a renderer\n",
     "\n",
-    "A renderer in PyTorch3d is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.\n",
+    "A renderer in PyTorch3D is composed of a **rasterizer** and a **shader** which each have a number of subcomponents such as a **camera** (orthographic/perspective). Here we initialize some of these components and use default values for the rest.\n",
     "\n",
     "In this example we will first create a **renderer** which uses a **perspective camera**, a **point light** and applies **phong shading**. Then we learn how to vary different components using the modular API. "
    ]
   },
@@ -545,7 +545,7 @@
    "source": [
     "## 6. Batched Rendering\n",
     "\n",
-    "One of the core design choices of the PyTorch3d API is to suport **batched inputs for all components**. \n",
+    "One of the core design choices of the PyTorch3D API is to support **batched inputs for all components**. \n",
     "The renderer and associated components can take batched inputs and **render a batch of output images in one forward pass**. We will now use this feature to render the mesh from many different viewpoints.\n"
    ]
   },
@@ -628,7 +628,7 @@
   },
   "source": [
    "## 7. Conclusion\n",
-    "In this tutorial we learnt how to **load** a textured mesh from an obj file, initialize a PyTorch3d datastructure called **Meshes**, set up an **Renderer** consisting of a **Rasterizer** and a **Shader**, and modify several components of the rendering pipeline. "
+    "In this tutorial we learnt how to **load** a textured mesh from an obj file, initialize a PyTorch3D datastructure called **Meshes**, set up a **Renderer** consisting of a **Rasterizer** and a **Shader**, and modify several components of the rendering pipeline. "
    ]
   }
 ],
diff --git a/pytorch3d/renderer/cameras.py b/pytorch3d/renderer/cameras.py
index 228e0160..cbc7c089 100644
--- a/pytorch3d/renderer/cameras.py
+++ b/pytorch3d/renderer/cameras.py
@@ -342,7 +342,7 @@ class OpenGLOrthographicCameras(TensorProperties):
         )
         ones = torch.ones((self._N), dtype=torch.float32, device=self.device)
         # NOTE: OpenGL flips handedness of coordinate system between camera
-        # space and NDC space so z sign is -ve. In PyTorch3d we maintain a
+        # space and NDC space so z sign is -ve. In PyTorch3D we maintain a
         # right handed coordinate system throughout.
         z_sign = +1.0
 
diff --git a/scripts/build_website.sh b/scripts/build_website.sh
index c092a539..ea8c27a4 100644
--- a/scripts/build_website.sh
+++ b/scripts/build_website.sh
@@ -31,7 +31,7 @@ done
 
 echo "-----------------------------------"
-echo "Building PyTorch3d Docusaurus site"
+echo "Building PyTorch3D Docusaurus site"
 echo "-----------------------------------"
 cd website || exit
 yarn
diff --git a/scripts/parse_tutorials.py b/scripts/parse_tutorials.py
index 37f13c46..155aebeb 100755
--- a/scripts/parse_tutorials.py
+++ b/scripts/parse_tutorials.py
@@ -36,7 +36,7 @@ JS_SCRIPTS = """
 
 
 def gen_tutorials(repo_dir: str) -> None:
-    """Generate HTML tutorials for PyTorch3d Docusaurus site from Jupyter notebooks.
+    """Generate HTML tutorials for PyTorch3D Docusaurus site from Jupyter notebooks.
 
     Also create ipynb and py versions of tutorial in Docusaurus site for download.
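To make the batched-rendering claim in the tutorial hunks above concrete: a single mesh can be replicated with `Meshes.extend` and rendered from a batch of cameras in one forward pass. A sketch assuming `mesh` and `renderer` were built as in the earlier cells (both are placeholder names here), with the camera class name as above:

```python
import torch
from pytorch3d.renderer import OpenGLPerspectiveCameras, look_at_view_transform

batch_size = 20
meshes = mesh.extend(batch_size)  # replicate the single mesh across the batch

# One camera per view: 20 elevation/azimuth positions around the object.
elev = torch.linspace(0, 180, batch_size)
azim = torch.linspace(-180, 180, batch_size)
R, T = look_at_view_transform(dist=2.7, elev=elev, azim=azim)
cameras = OpenGLPerspectiveCameras(device=mesh.device, R=R, T=T)

images = renderer(meshes, cameras=cameras)  # 20 images in one forward pass
```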
diff --git a/setup.py b/setup.py
index 9d233555..78a5cdc4 100755
--- a/setup.py
+++ b/setup.py
@@ -76,7 +76,7 @@ setup(
     version=__version__,
     author="FAIR",
     url="https://github.com/facebookresearch/pytorch3d",
-    description="PyTorch3d is FAIR's library of reusable components "
+    description="PyTorch3D is FAIR's library of reusable components "
     "for deep Learning with 3D data.",
     packages=find_packages(exclude=("configs", "tests")),
     install_requires=["torchvision>=0.4", "fvcore"],
diff --git a/website/README.md b/website/README.md
index 671cf1ad..2fbacd49 100644
--- a/website/README.md
+++ b/website/README.md
@@ -1,6 +1,6 @@
 This website was created with [Docusaurus](https://docusaurus.io/).
 
-# Building the PyTorch3d website
+# Building the PyTorch3D website
 
 ## Install
 
diff --git a/website/core/Footer.js b/website/core/Footer.js
index 465443ba..bada91a1 100644
--- a/website/core/Footer.js
+++ b/website/core/Footer.js
@@ -16,7 +16,7 @@ function SocialFooter(props) {
       data-count-href={`${repoUrl}/stargazers`}
       data-show-count="true"
       data-count-aria-label="# stargazers on GitHub"
-      aria-label="Star PyTorch3d on GitHub"
+      aria-label="Star PyTorch3D on GitHub"
     >
       {props.config.projectName}