Mirror of https://github.com/facebookresearch/pytorch3d.git
spelling

Summary: Collection of spelling things, mostly in docs / tutorials.

Reviewed By: gkioxari

Differential Revision: D26101323

fbshipit-source-id: 652f62bc9d71a4ff872efa21141225e43191353a
Committed by: Facebook GitHub Bot
Parent: c2e62a5087
Commit: 124bb5e391
@@ -17,7 +17,7 @@
 "\n",
 "This tutorial shows how to fit a volume given a set of views of a scene using differentiable volumetric rendering.\n",
 "\n",
-"More specificially, this tutorial will explain how to:\n",
+"More specifically, this tutorial will explain how to:\n",
 "1. Create a differentiable volumetric renderer.\n",
 "2. Create a Volumetric model (including how to use the `Volumes` class).\n",
 "3. Fit the volume based on the images using the differentiable volumetric renderer. \n",
@@ -138,7 +138,7 @@
 "The following initializes a volumetric renderer that emits a ray from each pixel of a target image and samples a set of uniformly-spaced points along the ray. At each ray-point, the corresponding density and color value is obtained by querying the corresponding location in the volumetric model of the scene (the model is described & instantiated in a later cell).\n",
 "\n",
 "The renderer is composed of a *raymarcher* and a *raysampler*.\n",
-"- The *raysampler* is responsible for emiting rays from image pixels and sampling the points along them. Here, we use the `NDCGridRaysampler` which follows the standard PyTorch3D coordinate grid convention (+X from right to left; +Y from bottom to top; +Z away from the user).\n",
+"- The *raysampler* is responsible for emitting rays from image pixels and sampling the points along them. Here, we use the `NDCGridRaysampler` which follows the standard PyTorch3D coordinate grid convention (+X from right to left; +Y from bottom to top; +Z away from the user).\n",
 "- The *raymarcher* takes the densities and colors sampled along each ray and renders each ray into a color and an opacity value of the ray's source pixel. Here we use the `EmissionAbsorptionRaymarcher` which implements the standard Emission-Absorption raymarching algorithm."
 ]
 },
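For orientation only, not part of this commit: the composition described in the markdown cell above might be wired up as in the following sketch, assuming the `pytorch3d.renderer` API. The raysampler arguments echo the code hunk below; `render_size` and the `max_depth` value are assumptions.

```python
# Sketch only (not part of this commit): composing a raysampler and a
# raymarcher into a volumetric renderer, as described in the cell above.
# `render_size` and `max_depth` are assumed values.
from pytorch3d.renderer import (
    EmissionAbsorptionRaymarcher,
    NDCGridRaysampler,
    VolumeRenderer,
)

render_size = 128  # assumed rendered image resolution

# Raysampler: one ray per image pixel, 150 uniformly-spaced points per ray.
raysampler = NDCGridRaysampler(
    image_width=render_size,
    image_height=render_size,
    n_pts_per_ray=150,
    min_depth=0.1,
    max_depth=3.0,  # assumed far bound of the scene
)

# Raymarcher: emission-absorption compositing of the sampled densities/colors
# into one color and one opacity value per ray.
raymarcher = EmissionAbsorptionRaymarcher()

# The volumetric renderer composes the two components.
renderer = VolumeRenderer(raysampler=raysampler, raymarcher=raymarcher)
```

Splitting the renderer into a raysampler and a raymarcher keeps the pixel-to-ray sampling pattern independent of the compositing rule applied along each ray.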
@@ -161,11 +161,11 @@
 "\n",
 "# 1) Instantiate the raysampler.\n",
 "# Here, NDCGridRaysampler generates a rectangular image\n",
-"# grid of rays whose coordinates follow the pytorch3d\n",
+"# grid of rays whose coordinates follow the PyTorch3D\n",
 "# coordinate conventions.\n",
 "# Since we use a volume of size 128^3, we sample n_pts_per_ray=150,\n",
 "# which roughly corresponds to a one ray-point per voxel.\n",
-"# We futher set the min_depth=0.1 since there is no surface within\n",
+"# We further set the min_depth=0.1 since there is no surface within\n",
 "# 0.1 units of any camera plane.\n",
 "raysampler = NDCGridRaysampler(\n",
 " image_width=render_size,\n",
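Not part of this commit: the comments above size the ray sampling against a 128^3 volume. For context, such a volume is typically held in a `pytorch3d.structures.Volumes` object (the class mentioned in the tutorial overview); the sketch below uses assumed shapes and an assumed voxel size.

```python
# Sketch only (not part of this commit): a 128^3 volumetric scene model of the
# kind the comments above size the ray sampling against. The shapes follow the
# Volumes convention (N, C, D, H, W); voxel_size is an assumed value.
import torch
from pytorch3d.structures import Volumes

volume_size = 128
densities = torch.zeros(1, 1, volume_size, volume_size, volume_size)  # opacity per voxel
colors = torch.zeros(1, 3, volume_size, volume_size, volume_size)     # RGB per voxel

volumes = Volumes(
    densities=densities,
    features=colors,
    voxel_size=3.0 / volume_size,  # assumed world-space extent of about 3 units
)
```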
@@ -334,7 +334,7 @@
 " batch_cameras\n",
 " ).split([3, 1], dim=-1)\n",
 " \n",
-" # Compute the silhoutte error as the mean huber\n",
+" # Compute the silhouette error as the mean huber\n",
 " # loss between the predicted masks and the\n",
 " # target silhouettes.\n",
 " sil_err = huber(\n",
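Not part of this commit: the `sil_err = huber(` call is cut off by the hunk boundary. A minimal sketch of a mean Huber-style silhouette error of the kind the comment describes is shown below; the tensor names, the `scaling` parameter, and the pseudo-Huber formulation of `huber` are assumptions, and the tutorial's own helper may differ.

```python
# Sketch only (not part of this commit): a mean Huber-style silhouette error,
# as described by the comment in the hunk above. `huber` here is a pseudo-Huber
# helper written for illustration; tensor names and shapes are assumptions.
import torch

def huber(x, y, scaling=0.1):
    # Pseudo-Huber error: quadratic for small residuals, linear for large ones.
    diff_sq = (x - y) ** 2
    return ((1.0 + diff_sq / (scaling ** 2)).clamp(1e-4).sqrt() - 1.0) * scaling

rendered_silhouettes = torch.rand(4, 128, 128)  # assumed predicted masks
target_silhouettes = torch.rand(4, 128, 128)    # assumed ground-truth masks

# Mean of the per-pixel Huber error between predicted and target silhouettes.
sil_err = huber(rendered_silhouettes, target_silhouettes).abs().mean()
```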