diff --git a/docs/notes/cameras.md b/docs/notes/cameras.md
index c96e3451..fec21f7c 100644
--- a/docs/notes/cameras.md
+++ b/docs/notes/cameras.md
@@ -1,3 +1,8 @@
+---
+hide_title: true
+sidebar_label: Cameras
+---
+
# Cameras
## Camera Coordinate Systems
@@ -6,7 +11,7 @@ When working with 3D data, there are 4 coordinate systems users need to know
* **World coordinate system**
This is the system in which the object/scene lives - the world.
* **Camera view coordinate system**
-This is the system that has its origin on the image plane and the `Z`-axis perpendicular to the image plane. In PyTorch3D, we assume that `+X` points left, and `+Y` points up and `+Z` points out from the image plane. The transformation from world to view happens after applying a rotation (`R`) and translation (`T`).
+This is the system that has its origin on the image plane and the `Z`-axis perpendicular to the image plane. In PyTorch3D, we assume that `+X` points left, `+Y` points up, and `+Z` points out from the image plane. The transformation from world to view is performed by applying a rotation (`R`) followed by a translation (`T`).
* **NDC coordinate system**
This is the normalized coordinate system that confines the rendered part of the object/scene to a volume, also known as the view volume. Under the PyTorch3D convention, `(+1, +1, znear)` is the top left near corner, and `(-1, -1, zfar)` is the bottom right far corner of the volume. The transformation from view to NDC is performed by applying the camera projection matrix (`P`).
* **Screen coordinate system**
@@ -19,7 +24,7 @@ An illustration of the 4 coordinate systems is shown below
Cameras in PyTorch3D transform an object/scene from world to NDC by first transforming the object/scene to view (via transforms `R` and `T`) and then projecting the 3D object/scene to NDC (via the projection matrix `P`, also known as the camera matrix). Thus, the camera parameters in `P` are assumed to be in NDC space. If the user has camera parameters in screen space, which is a common use case, the parameters should be transformed to NDC (see below for an example).
-We describe the camera types in PyTorch3D and the convention for the camera parameters provided at construction time.
+We describe the camera types in PyTorch3D and the convention for the camera parameters provided at construction time.
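The screen-to-NDC conversion mentioned above can be sketched with plain arithmetic. This is an illustrative calculation assuming NDC spans `[-1, 1]` along each image axis with `+X` left and `+Y` up; it is not necessarily the exact helper PyTorch3D provides:

```python
def screen_to_ndc(fx_s, fy_s, px_s, py_s, image_width, image_height):
    """Convert screen-space intrinsics (in pixels) to NDC.

    Assumes NDC spans [-1, 1] along each axis; the principal point is
    recentred on the image center and negated, since screen x/y increase
    in the opposite direction to NDC +X/+Y.
    """
    fx = fx_s * 2.0 / image_width
    fy = fy_s * 2.0 / image_height
    px = -(px_s - image_width / 2.0) * 2.0 / image_width
    py = -(py_s - image_height / 2.0) * 2.0 / image_height
    return fx, fy, px, py

# A principal point at the image center maps to the NDC origin:
print(screen_to_ndc(32.0, 16.0, 32.0, 16.0, 64, 32))  # (1.0, 1.0, -0.0, -0.0)
```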
### Camera Types
@@ -28,12 +33,12 @@ All cameras inherit from `CamerasBase` which is a base class for all cameras. Py
* `get_world_to_view_transform` which returns a 3D transform from world coordinates to the camera view coordinates (R, T)
* `get_full_projection_transform` which composes the projection transform (P) with the world-to-view transform (R, T)
* `transform_points` which takes a set of input points in world coordinates and projects them to NDC coordinates ranging from [-1, -1, znear] to [+1, +1, zfar].
-* `transform_points_screen` which takes a set of input points in world coordinates and projects them to the screen coordinates ranging from [0, 0, znear] to [W-1, H-1, zfar]
+* `transform_points_screen` which takes a set of input points in world coordinates and projects them to the screen coordinates ranging from [0, 0, znear] to [W-1, H-1, zfar]
Users can easily customize their own cameras. For each new camera, users should implement the `get_projection_transform` routine that returns the mapping `P` from camera view coordinates to NDC coordinates.
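The mapping `P` that a custom camera's `get_projection_transform` encodes can be illustrated with a small NumPy sketch. This is not the PyTorch3D API (the real routine returns a `Transform3d` and handles batching); it only shows the kind of 4x4 homogeneous matrix involved, here for a simple orthographic projection:

```python
import numpy as np

def orthographic_projection_matrix(scale_x=1.0, scale_y=1.0):
    """A minimal orthographic projection `P` (row-vector convention):
    x_ndc = scale_x * x_view, y_ndc = scale_y * y_view, z passes through.
    Illustrative sketch only, not PyTorch3D's actual API."""
    P = np.eye(4)
    P[0, 0] = scale_x
    P[1, 1] = scale_y
    return P

def apply_projection(points, P):
    """Apply P to (N, 3) view-space points via homogeneous coordinates."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    out = homo @ P
    return out[:, :3] / out[:, 3:4]

pts = np.array([[0.5, -0.25, 2.0]])
print(apply_projection(pts, orthographic_projection_matrix(2.0, 2.0)))  # [[1.0, -0.5, 2.0]]
```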
#### FoVPerspectiveCameras, FoVOrthographicCameras
-These two cameras follow the OpenGL convention for perspective and orthographic cameras respectively. The user provides the near `znear` and far `zfar` field which confines the view volume in the `Z` axis. The view volume in the `XY` plane is defined by field of view angle (`fov`) in the case of `FoVPerspectiveCameras` and by `min_x, min_y, max_x, max_y` in the case of `FoVOrthographicCameras`.
+These two cameras follow the OpenGL convention for perspective and orthographic cameras respectively. The user provides the near (`znear`) and far (`zfar`) fields, which confine the view volume along the `Z` axis. The view volume in the `XY` plane is defined by the field of view angle (`fov`) in the case of `FoVPerspectiveCameras`, and by `min_x, min_y, max_x, max_y` in the case of `FoVOrthographicCameras`.
#### PerspectiveCameras, OrthographicCameras
These two cameras follow the Multi-View Geometry convention for cameras. The user provides the focal length (`fx`, `fy`) and the principal point (`px`, `py`). For example, `camera = PerspectiveCameras(focal_length=((fx, fy),), principal_point=((px, py),))`
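For the perspective case, the intrinsics above amount to the standard pinhole projection of a view-space point. The following is illustrative arithmetic only; the actual class handles batching, tensors, and the full world-to-NDC transform:

```python
def project_perspective(x, y, z, fx, fy, px, py):
    """Pinhole projection of a single camera-view point:
    x_ndc = fx * x / z + px, y_ndc = fy * y / z + py."""
    return fx * x / z + px, fy * y / z + py

# A point on the optical axis projects to the principal point:
print(project_perspective(0.0, 0.0, 2.0, 1.5, 1.5, 0.1, -0.1))  # (0.1, -0.1)
```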
diff --git a/docs/notes/cubify.md b/docs/notes/cubify.md
index 19498a3d..8a63fe83 100644
--- a/docs/notes/cubify.md
+++ b/docs/notes/cubify.md
@@ -1,3 +1,8 @@
+---
+hide_title: true
+sidebar_label: Cubify
+---
+
# Cubify
The [cubify operator](https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/ops/cubify.py) converts a 3D occupancy grid of shape `BxDxHxW`, where `B` is the batch size, into a mesh instantiated as a [Meshes](https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/structures/meshes.py) data structure of `B` elements. The operator replaces every occupied voxel (if its occupancy probability is greater than a user-defined threshold) with a cuboid of 12 faces and 8 vertices. Shared vertices are merged, and internal faces are removed, resulting in a **watertight** mesh.
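The thresholding-and-replacement idea can be sketched in a few lines of NumPy. This toy version emits the 8 corner vertices of a unit cube per occupied voxel; the vertex merging and internal-face removal that make the real operator's output watertight are omitted:

```python
import numpy as np

def naive_cubify(voxels, thresh):
    """Toy sketch of cubify: for a (D, H, W) probability grid, emit the
    8 corner vertices of a unit cube for every voxel above `thresh`.
    Vertex merging and internal-face removal are intentionally omitted."""
    corners = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    verts = [idx + corners for idx in np.argwhere(voxels > thresh)]
    return np.concatenate(verts) if verts else np.empty((0, 3))

grid = np.zeros((2, 2, 2))
grid[0, 0, 0] = 0.9
print(naive_cubify(grid, 0.5).shape)  # one occupied voxel -> 8 vertices
```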
diff --git a/docs/notes/datasets.md b/docs/notes/datasets.md
index 0eead3f9..f090618d 100644
--- a/docs/notes/datasets.md
+++ b/docs/notes/datasets.md
@@ -1,3 +1,8 @@
+---
+hide_title: true
+sidebar_label: Data loaders
+---
+
# Data loaders for common 3D Datasets
### ShapeNetCore
diff --git a/docs/notes/renderer_getting_started.md b/docs/notes/renderer_getting_started.md
index e1838628..da222a7a 100644
--- a/docs/notes/renderer_getting_started.md
+++ b/docs/notes/renderer_getting_started.md
@@ -41,7 +41,7 @@ Rendering requires transformations between several different coordinate frames:
For example, given a teapot mesh, the world coordinate frame, camera coordinate frame, and image are shown in the figure below. Note that the world and camera coordinate frames have the +z direction pointing into the page.
-
+
---
diff --git a/website/pages/en/index.js b/website/pages/en/index.js
index 2cc22630..2f13738e 100644
--- a/website/pages/en/index.js
+++ b/website/pages/en/index.js
@@ -131,10 +131,7 @@ loss_chamfer, _ = chamfer_distance(sample_sphere, sample_test)