mirror of
https://github.com/facebookresearch/pytorch3d.git
synced 2025-12-22 07:10:34 +08:00
update for version 0.5.0

# Batching

In deep learning, every optimization step operates on multiple input examples for robust training. Thus, efficient batching is crucial. For image inputs, batching is straightforward; N images are resized to the same height and width and stacked as a 4-dimensional tensor of shape `N x 3 x H x W`. For meshes, batching is less straightforward.
<p><img src="assets/batch_intro.png" alt="batch_intro" align="middle"/></p>
|
||||
<h2><a class="anchor" aria-hidden="true" id="batch-modes-for-meshes"></a><a href="#batch-modes-for-meshes" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Batch modes for meshes</h2>
|
||||

Assume you want to construct a batch containing two meshes, with `mesh1 = (v1: V1 x 3, f1: F1 x 3)` containing `V1` vertices and `F1` faces, and `mesh2 = (v2: V2 x 3, f2: F2 x 3)` with `V2 (!= V1)` vertices and `F2 (!= F1)` faces. The [Meshes](https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/structures/meshes.py) data structure provides three different ways to batch *heterogeneous* meshes. If `meshes = Meshes(verts = [v1, v2], faces = [f1, f2])` is an instantiation of the data structure, then

- *List*: the vertices and faces are kept as lists of per-mesh tensors, returned by `verts_list()` and `faces_list()`.
- *Padded*: the per-mesh tensors are stacked into batch tensors of shape `2 x max(V1, V2) x 3` and `2 x max(F1, F2) x 3`, with the smaller mesh padded by placeholder values; returned by `verts_padded()` and `faces_padded()`.
- *Packed*: the per-mesh tensors are concatenated into tensors of shape `(V1 + V2) x 3` and `(F1 + F2) x 3`, together with auxiliary index tensors that record where each mesh starts; returned by `verts_packed()` and `faces_packed()`.
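
As a concrete illustration, here is a minimal sketch of the three views on one `Meshes` batch (random vertices and faces; the tensor shapes, not the values, are the point):

```python
import torch
from pytorch3d.structures import Meshes

V1, V2, F1, F2 = 5, 8, 3, 6
v1, v2 = torch.randn(V1, 3), torch.randn(V2, 3)   # per-mesh vertex coordinates
f1 = torch.randint(V1, (F1, 3))                   # triangle indices into v1
f2 = torch.randint(V2, (F2, 3))                   # triangle indices into v2

meshes = Meshes(verts=[v1, v2], faces=[f1, f2])

# List: one tensor per mesh, sizes untouched.
print([v.shape for v in meshes.verts_list()])     # shapes (5, 3) and (8, 3)

# Padded: a single batch tensor of shape (N, max(V1, V2), 3).
print(meshes.verts_padded().shape)                # (2, 8, 3)

# Packed: all vertices concatenated into (V1 + V2, 3); index tensors such as
# mesh_to_verts_packed_first_idx() recover the per-mesh boundaries.
print(meshes.verts_packed().shape)                # (13, 3)
```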

![batch_modes](assets/batch_modes.gif)
<h2><a class="anchor" aria-hidden="true" id="use-cases-for-batch-modes"></a><a href="#use-cases-for-batch-modes" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Use cases for batch modes</h2>
|
||||

The need for different mesh batch modes is inherent to the way PyTorch operators are implemented. To fully utilize the optimized PyTorch ops, the [Meshes](https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/structures/meshes.py) data structure allows for efficient conversion between the different batch modes. This is crucial when aiming for a fast and efficient training cycle. An example of this is [Mesh R-CNN](https://github.com/facebookresearch/meshrcnn). Here, in the same forward pass, different parts of the network assume different inputs, which are computed by converting between the different batch modes. In particular, [vert_align](https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/ops/vert_align.py) assumes a *padded* input tensor, while immediately afterwards [graph_conv](https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/ops/graph_conv.py) assumes a *packed* input tensor.
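
A rough sketch of that Mesh R-CNN pattern, assuming a made-up image feature map and the two-mesh batch from above (`vert_align`, `GraphConv` and the `Meshes` conversion helpers are real PyTorch3D APIs; the dimensions are arbitrary):

```python
import torch
from pytorch3d.ops import GraphConv, vert_align
from pytorch3d.structures import Meshes

meshes = Meshes(
    verts=[torch.randn(5, 3), torch.randn(8, 3)],
    faces=[torch.randint(5, (3, 3)), torch.randint(8, (6, 3))],
)
feats = torch.randn(2, 16, 32, 32)  # hypothetical image features, (N, C, H, W)

# vert_align samples image features at vertex locations and operates on the
# *padded* representation: output is (N, max_V, C).
vert_feats_padded = vert_align(feats, meshes.verts_padded())

# graph_conv operates on the *packed* representation, so convert in between:
# flatten the padded tensor and gather the rows of real (non-pad) vertices.
pack_idx = meshes.verts_padded_to_packed_idx()
vert_feats_packed = vert_feats_padded.reshape(-1, 16)[pack_idx]

gconv = GraphConv(16, 16)
out = gconv(vert_feats_packed, meshes.edges_packed())  # (sum_V, 16)
```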

![meshrcnn](assets/meshrcnn.png)

# Cameras

## Camera Coordinate Systems

- **World coordinate system** This is the system in which the object/scene lives - the world.
- **Camera view coordinate system** This is the system that has its origin on the image plane and the `Z`-axis perpendicular to the image plane. In PyTorch3D, we assume that `+X` points left, `+Y` points up and `+Z` points out from the image plane. The transformation from world to view happens after applying a rotation (`R`) and translation (`T`).
- **NDC coordinate system** This is the normalized coordinate system that confines the rendered part of the object/scene in a volume; it is also known as the view volume. Under the PyTorch3D convention, `(+1, +1, znear)` is the top left near corner and `(-1, -1, zfar)` is the bottom right far corner of the volume. For non-square volumes, the side of the volume in `XY` with the smallest length ranges from `[-1, 1]`, while the larger side ranges from `[-s, s]`, where `s` is the aspect ratio and `s > 1` (larger side divided by smaller side). The transformation from view to NDC happens after applying the camera projection matrix (`P`).
- **Screen coordinate system** This is another representation of the view volume with the `XY` coordinates defined in pixel space instead of a normalized space.

An illustration of the 4 coordinate systems is shown below:

![cameras](https://user-images.githubusercontent.com/4369065/90317960-d9b8db80-dee1-11ea-8088-39c414b1e2fa.png)
<h2><a class="anchor" aria-hidden="true" id="defining-cameras-in-pytorch3d"></a><a href="#defining-cameras-in-pytorch3d" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Defining Cameras in PyTorch3D</h2>
|
||||

Cameras in PyTorch3D transform an object/scene from world coordinates to a normalized space by first transforming the object/scene to view (via transforms `R` and `T`) and then projecting the 3D object/scene to that normalized space via the projection matrix `P = K[R | T]`, where `K` is the intrinsic matrix. The camera parameters in `K` define the normalized space. If users define the camera parameters in NDC space, then the transform projects points to NDC. If the camera parameters are defined in screen space, the transformed points are in screen space.

Note that the base `CamerasBase` class makes no assumptions about the coordinate systems. All the above transforms are geometric transforms defined purely by `R`, `T` and `K`. This means that users can define cameras in any coordinate system and for any transforms. The method `transform_points` will apply `K`, `R` and `T` to the input points as a simple matrix transformation. However, if users wish to use cameras with the PyTorch3D renderer, they need to abide by PyTorch3D's coordinate system assumptions (read below).
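
For instance, here is a minimal sketch (arbitrary rotation/translation, default `K`) showing that `transform_points` is exactly the composition of those geometric transforms:

```python
import torch
from pytorch3d.renderer import PerspectiveCameras
from pytorch3d.transforms import random_rotations

R = random_rotations(1)              # (1, 3, 3) world-to-view rotation
T = torch.tensor([[0.0, 0.0, 3.0]])  # (1, 3) translation along +Z
cameras = PerspectiveCameras(focal_length=(1.2,), R=R, T=T)

pts_world = torch.randn(1, 10, 3)    # a batch of world-space points
pts_proj = cameras.transform_points(pts_world)  # applies R, T, then K

# The same result, composed by hand from the camera's transforms:
full = cameras.get_full_projection_transform()  # world -> projection space
assert torch.allclose(full.transform_points(pts_world), pts_proj, atol=1e-5)
```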

Below, we provide instantiations of common camera types in PyTorch3D and show how users can flexibly define the projection space.
<h2><a class="anchor" aria-hidden="true" id="interfacing-with-the-pytorch3d-renderer"></a><a href="#interfacing-with-the-pytorch3d-renderer" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Interfacing with the PyTorch3D Renderer</h2>
|
||||

The PyTorch3D renderer for both meshes and point clouds assumes that the camera transformed points, meaning the points passed as input to the rasterizer, are in PyTorch3D's NDC space. So to get the expected rendering outcome, users need to make sure that their 3D input data and cameras abide by these PyTorch3D coordinate system assumptions. The PyTorch3D coordinate system assumes `+X: left`, `+Y: up` and `+Z: from us to scene` (right-handed). Confusion regarding coordinate systems is common, so we advise that you spend some time understanding your data and the coordinate system it lives in, and transform it accordingly before using the PyTorch3D renderer.

Examples of cameras and how they interface with the PyTorch3D renderer can be found in our tutorials.
<h3><a class="anchor" aria-hidden="true" id="camera-types"></a><a href="#camera-types" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Camera Types</h3>
|
||||

All cameras inherit from `CamerasBase`. PyTorch3D provides four different camera types, and `CamerasBase` defines methods that are common to all camera models (a short usage sketch follows the list):

- `get_camera_center`, which returns the optical center of the camera in world coordinates
- `get_world_to_view_transform`, which returns a 3D transform from world coordinates to the camera view coordinates `(R, T)`
- `get_full_projection_transform`, which composes the projection transform (`K`) with the world-to-view transform `(R, T)`
- `transform_points`, which takes a set of input points in world coordinates and projects them to NDC coordinates ranging from `[-1, -1, znear]` to `[+1, +1, zfar]`
- `get_ndc_camera_transform`, which defines the conversion to PyTorch3D's NDC space and is called when interfacing with the PyTorch3D renderer. If the camera is defined in NDC space, the identity transform is returned. If the camera is defined in screen space, the conversion from screen to NDC is returned. If users define their own camera in screen space, they need to think about the screen-to-NDC conversion. We provide examples for `PerspectiveCameras` and `OrthographicCameras`.
- `transform_points_ndc`, which takes a set of points in world coordinates and projects them to PyTorch3D's NDC space
- `transform_points_screen`, which takes a set of input points in world coordinates and projects them to screen coordinates ranging from `[0, 0, znear]` to `[W-1, H-1, zfar]`
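
A short sketch exercising a few of these methods on an FoV camera (all values arbitrary; `image_size` is the `(h, w)` of a hypothetical render target):

```python
import torch
from pytorch3d.renderer import FoVPerspectiveCameras

cameras = FoVPerspectiveCameras(znear=0.1, zfar=100.0, fov=60.0)

# With the default R (identity) and T (zero), the optical center is the origin.
print(cameras.get_camera_center())   # (1, 3) tensor of zeros

pts = torch.tensor([[[0.0, 0.0, 2.0], [0.5, 0.5, 4.0]]])  # (1, P, 3) points

print(cameras.transform_points(pts))  # projection to NDC space
print(cameras.transform_points_screen(pts, image_size=((128, 256),)))  # pixels
```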

Users can easily customize their own cameras. For each new camera, users should implement the `get_projection_transform` routine that returns the mapping `P` from camera view coordinates to NDC coordinates.
<h4><a class="anchor" aria-hidden="true" id="fovperspectivecameras-fovorthographiccameras"></a><a href="#fovperspectivecameras-fovorthographiccameras" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>FoVPerspectiveCameras, FoVOrthographicCameras</h4>
|
||||

These two cameras follow the OpenGL convention for perspective and orthographic cameras respectively. The user provides the near (`znear`) and far (`zfar`) fields, which confine the view volume along the `Z` axis. The view volume in the `XY` plane is defined by the field of view angle (`fov`) in the case of `FoVPerspectiveCameras` and by `min_x, min_y, max_x, max_y` in the case of `FoVOrthographicCameras`. These cameras are by default in NDC space.
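
For example, minimal instantiations of each might look like this (values arbitrary):

```python
from pytorch3d.renderer import FoVOrthographicCameras, FoVPerspectiveCameras

# Perspective: the view volume is a frustum between znear/zfar with a 60° fov.
persp_cameras = FoVPerspectiveCameras(znear=0.1, zfar=100.0, fov=60.0)

# Orthographic: the view volume is a box between znear/zfar with XY extents.
ortho_cameras = FoVOrthographicCameras(
    znear=0.1,
    zfar=100.0,
    min_x=-1.0,
    max_x=1.0,
    min_y=-1.0,
    max_y=1.0,
)
```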

#### PerspectiveCameras, OrthographicCameras

These two cameras follow the Multi-View Geometry convention for cameras. The user provides the focal length (`fx`, `fy`) and the principal point (`px`, `py`). For example, `camera = PerspectiveCameras(focal_length=((fx, fy),), principal_point=((px, py),))`.

The camera projection of a 3D point `(X, Y, Z)` in view coordinates to a point `(x, y, z)` in projection space (either NDC or screen) is

```
# for perspective camera
x = fx * X / Z + px
y = fy * Y / Z + py
z = 1 / Z

# for orthographic camera
x = fx * X + px
y = fy * Y + py
z = Z
```
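
To make the formulas concrete, here is a quick hand-computed sketch in plain Python (no PyTorch3D; the camera parameters and the point are arbitrary):

```python
fx, fy = 1.2, 1.2          # focal length
px, py = 0.2, 0.5          # principal point
X, Y, Z = 0.4, -0.2, 2.0   # a point in camera view coordinates

# perspective projection
x = fx * X / Z + px        # 1.2 * 0.4 / 2.0 + 0.2 = 0.44
y = fy * Y / Z + py        # 1.2 * -0.2 / 2.0 + 0.5 = 0.38
z = 1 / Z                  # 0.5, an inverse-depth value that preserves ordering

# orthographic projection (no divide by depth)
x_o = fx * X + px          # 0.68
y_o = fy * Y + py          # 0.26
z_o = Z                    # 2.0
```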

The user can define the camera parameters in NDC or in screen space. Screen space camera parameters are common; in that case, the user needs to set `in_ndc` to `False` and also provide the `image_size=(height, width)` of the screen, aka the image.

`get_ndc_camera_transform` provides the transform from screen to NDC space in PyTorch3D. Note that screen space assumes the principal point is provided in a space with `+X` left, `+Y` down, and origin at the top left corner of the image. To convert to NDC, we need to account for the scaling of the normalized space as well as the change in `XY` direction.

Below are examples of equivalent `PerspectiveCameras` instantiations in NDC and screen space, respectively.

```python
# NDC space camera
fcl_ndc = (1.2,)
prp_ndc = ((0.2, 0.5),)
cameras_ndc = PerspectiveCameras(focal_length=fcl_ndc, principal_point=prp_ndc)

# Screen space camera
image_size = ((128, 256),)  # (h, w)
fcl_screen = (76.2,)  # fcl_ndc * (min(image_size) - 1) / 2
prp_screen = ((114.8, 31.75),)  # (w - 1) / 2 - px_ndc * (min(image_size) - 1) / 2, (h - 1) / 2 - py_ndc * (min(image_size) - 1) / 2
cameras_screen = PerspectiveCameras(focal_length=fcl_screen, principal_point=prp_screen, in_ndc=False, image_size=image_size)
```

The relationship between screen and NDC specifications of a camera's `focal_length` and `principal_point` is given by the following equations, where `s = min(image_width, image_height)`. The transformation of x and y coordinates between screen and NDC is exactly the same as for px and py.

```
fx_ndc = fx_screen * 2.0 / (s - 1)
fy_ndc = fy_screen * 2.0 / (s - 1)

px_ndc = - (px_screen - (image_width - 1) / 2.0) * 2.0 / (s - 1)
py_ndc = - (py_screen - (image_height - 1) / 2.0) * 2.0 / (s - 1)
```
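
Plugging the screen-space camera from the example above into these equations recovers the NDC parameters, up to float rounding (a small sanity-check sketch in plain Python):

```python
image_height, image_width = 128, 256
s = min(image_width, image_height)       # 128

fx_screen, fy_screen = 76.2, 76.2        # fcl_screen from the example above
px_screen, py_screen = 114.8, 31.75      # prp_screen from the example above

fx_ndc = fx_screen * 2.0 / (s - 1)                                # 1.2
fy_ndc = fy_screen * 2.0 / (s - 1)                                # 1.2
px_ndc = -(px_screen - (image_width - 1) / 2.0) * 2.0 / (s - 1)   # 0.2
py_ndc = -(py_screen - (image_height - 1) / 2.0) * 2.0 / (s - 1)  # 0.5

print(fx_ndc, fy_ndc, px_ndc, py_ndc)    # matches fcl_ndc and prp_ndc above
```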
|
||||
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Georgia Gkioxari</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/renderer_getting_started"><span class="arrow-prev">← </span><span>Getting Started</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#camera-coordinate-systems">Camera Coordinate Systems</a></li><li><a href="#defining-cameras-in-pytorch3d">Defining Cameras in PyTorch3D</a></li><li><a href="#interfacing-with-the-pytorch3d-renderer">Interfacing with the PyTorch3D Renderer</a><ul class="toc-headings"><li><a href="#camera-types">Camera Types</a></li></ul></li></ul></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2021 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
|
||||
@@ -1,4 +1,4 @@
|
||||
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>cameras · PyTorch3D</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Cameras"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="cameras · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Cameras"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
|
||||
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>cameras · PyTorch3D</title><meta name="viewport" content="width=device-width, initial-scale=1.0"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Cameras"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="cameras · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Cameras"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
|
||||
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
|
||||
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
|
||||
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
|
||||
@@ -71,45 +71,68 @@ This is the system the object/scene lives - the world.</li>
|
||||
<li><strong>Camera view coordinate system</strong>
|
||||
This is the system that has its origin on the image plane and the <code>Z</code>-axis perpendicular to the image plane. In PyTorch3D, we assume that <code>+X</code> points left, and <code>+Y</code> points up and <code>+Z</code> points out from the image plane. The transformation from world to view happens after applying a rotation (<code>R</code>) and translation (<code>T</code>).</li>
|
||||
<li><strong>NDC coordinate system</strong>
|
||||
This is the normalized coordinate system that confines in a volume the renderered part of the object/scene. Also known as view volume. Under the PyTorch3D convention, <code>(+1, +1, znear)</code> is the top left near corner, and <code>(-1, -1, zfar)</code> is the bottom right far corner of the volume. The transformation from view to NDC happens after applying the camera projection matrix (<code>P</code>).</li>
|
||||
This is the normalized coordinate system that confines in a volume the rendered part of the object/scene. Also known as view volume. Under the PyTorch3D convention, <code>(+1, +1, znear)</code> is the top left near corner, and <code>(-1, -1, zfar)</code> is the bottom right far corner of the volume. For non-square volumes, the side of the volume in <code>XY</code> with the smallest length ranges from <code>[-1, 1]</code> while the larger side from <code>[-s, s]</code>, where <code>s</code> is the aspect ratio and <code>s > 1</code> (larger divided by smaller side).
|
||||
The transformation from view to NDC happens after applying the camera projection matrix (<code>P</code>).</li>
|
||||
<li><strong>Screen coordinate system</strong>
|
||||
This is another representation of the view volume with the <code>XY</code> coordinates defined in pixel space instead of a normalized space.</li>
|
||||
</ul>
|
||||
<p>An illustration of the 4 coordinate systems is shown below
|
||||
<img src="https://user-images.githubusercontent.com/4369065/90317960-d9b8db80-dee1-11ea-8088-39c414b1e2fa.png" alt="cameras"></p>
|
||||
<h2><a class="anchor" aria-hidden="true" id="defining-cameras-in-pytorch3d"></a><a href="#defining-cameras-in-pytorch3d" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Defining Cameras in PyTorch3D</h2>
|
||||
<p>Cameras in PyTorch3D transform an object/scene from world to NDC by first transforming the object/scene to view (via transforms <code>R</code> and <code>T</code>) and then projecting the 3D object/scene to NDC (via the projection matrix <code>P</code>, else known as camera matrix). Thus, the camera parameters in <code>P</code> are assumed to be in NDC space. If the user has camera parameters in screen space, which is a common use case, the parameters should transformed to NDC (see below for an example)</p>
|
||||
<p>We describe the camera types in PyTorch3D and the convention for the camera parameters provided at construction time.</p>
|
||||
<p>Cameras in PyTorch3D transform an object/scene from world to view by first transforming the object/scene to view (via transforms <code>R</code> and <code>T</code>) and then projecting the 3D object/scene to a normalized space via the projection matrix <code>P = K[R | T]</code>, where <code>K</code> is the intrinsic matrix. The camera parameters in <code>K</code> define the normalized space. If users define the camera parameters in NDC space, then the transform projects points to NDC. If the camera parameters are defined in screen space, the transformed points are in screen space.</p>
|
||||
<p>Note that the base <code>CamerasBase</code> class makes no assumptions about the coordinate systems. All the above transforms are geometric transforms defined purely by <code>R</code>, <code>T</code> and <code>K</code>. This means that users can define cameras in any coordinate system and for any transforms. The method <code>transform_points</code> will apply <code>K</code> , <code>R</code> and <code>T</code> to the input points as a simple matrix transformation. However, if users wish to use cameras with the PyTorch3D renderer, they need to abide to PyTorch3D's coordinate system assumptions (read below).</p>
|
||||
<p>We provide instantiations of common camera types in PyTorch3D and how users can flexibly define the projection space below.</p>
|
||||
<h2><a class="anchor" aria-hidden="true" id="interfacing-with-the-pytorch3d-renderer"></a><a href="#interfacing-with-the-pytorch3d-renderer" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Interfacing with the PyTorch3D Renderer</h2>
|
||||
<p>The PyTorch3D renderer for both meshes and point clouds assumes that the camera transformed points, meaning the points passed as input to the rasterizer, are in PyTorch3D's NDC space. So to get the expected rendering outcome, users need to make sure that their 3D input data and cameras abide by these PyTorch3D coordinate system assumptions. The PyTorch3D coordinate system assumes <code>+X:left</code>, <code>+Y: up</code> and <code>+Z: from us to scene</code> (right-handed) . Confusions regarding coordinate systems are common so we advise that you spend some time understanding your data and the coordinate system they live in and transform them accordingly before using the PyTorch3D renderer.</p>
|
||||
<p>Examples of cameras and how they interface with the PyTorch3D renderer can be found in our tutorials.</p>
|
||||
<h3><a class="anchor" aria-hidden="true" id="camera-types"></a><a href="#camera-types" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Camera Types</h3>
|
||||
<p>All cameras inherit from <code>CamerasBase</code> which is a base class for all cameras. PyTorch3D provides four different camera types. The <code>CamerasBase</code> defines methods that are common to all camera models:</p>
|
||||
<ul>
|
||||
<li><code>get_camera_center</code> that returns the optical center of the camera in world coordinates</li>
|
||||
<li><code>get_world_to_view_transform</code> which returns a 3D transform from world coordinates to the camera view coordinates (R, T)</li>
|
||||
<li><code>get_full_projection_transform</code> which composes the projection transform (P) with the world-to-view transform (R, T)</li>
|
||||
<li><code>get_world_to_view_transform</code> which returns a 3D transform from world coordinates to the camera view coordinates <code>(R, T)</code></li>
|
||||
<li><code>get_full_projection_transform</code> which composes the projection transform (<code>K</code>) with the world-to-view transform <code>(R, T)</code></li>
|
||||
<li><code>transform_points</code> which takes a set of input points in world coordinates and projects to NDC coordinates ranging from [-1, -1, znear] to [+1, +1, zfar].</li>
|
||||
<li><code>get_ndc_camera_transform</code> which defines the conversion to PyTorch3D's NDC space and is called when interfacing with the PyTorch3D renderer. If the camera is defined in NDC space, then the identity transform is returned. If the cameras is defined in screen space, the conversion from screen to NDC is returned. If users define their own camera in screen space, they need to think of the screen to NDC conversion. We provide examples for the <code>PerspectiveCameras</code> and <code>OrthographicCameras</code>.</li>
|
||||
<li><code>transform_points_ndc</code> which takes a set of points in world coordinates and projects them to PyTorch3D's NDC space</li>
|
||||
<li><code>transform_points_screen</code> which takes a set of input points in world coordinates and projects them to the screen coordinates ranging from [0, 0, znear] to [W-1, H-1, zfar]</li>
|
||||
</ul>
|
||||
<p>Users can easily customize their own cameras. For each new camera, users should implement the <code>get_projection_transform</code> routine that returns the mapping <code>P</code> from camera view coordinates to NDC coordinates.</p>
<h4><a class="anchor" aria-hidden="true" id="fovperspectivecameras-fovorthographiccameras"></a><a href="#fovperspectivecameras-fovorthographiccameras" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>FoVPerspectiveCameras, FoVOrthographicCameras</h4>
<p>These two cameras follow the OpenGL convention for perspective and orthographic cameras respectively. The user provides the near (<code>znear</code>) and far (<code>zfar</code>) clipping planes, which confine the view volume along the <code>Z</code> axis. The view volume in the <code>XY</code> plane is defined by the field of view angle (<code>fov</code>) in the case of <code>FoVPerspectiveCameras</code> and by <code>min_x, min_y, max_x, max_y</code> in the case of <code>FoVOrthographicCameras</code>. These cameras are by default in NDC space.</p>
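<p>For example (the parameter values here are arbitrary choices for illustration):</p>
<pre><code class="hljs css language-python">from pytorch3d.renderer import FoVPerspectiveCameras, FoVOrthographicCameras

persp = FoVPerspectiveCameras(znear=0.1, zfar=100.0, fov=60.0)  # fov is in degrees by default
ortho = FoVOrthographicCameras(
    znear=0.1, zfar=100.0, min_x=-1.0, max_x=1.0, min_y=-1.0, max_y=1.0
)
</code></pre>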
<h4><a class="anchor" aria-hidden="true" id="perspectivecameras-orthographiccameras"></a><a href="#perspectivecameras-orthographiccameras" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>PerspectiveCameras, OrthographicCameras</h4>
<p>These two cameras follow the Multi-View Geometry convention for cameras. The user provides the focal length (<code>fx</code>, <code>fy</code>) and the principal point (<code>px</code>, <code>py</code>). For example, <code>camera = PerspectiveCameras(focal_length=((fx, fy),), principal_point=((px, py),))</code></p>
<p>The camera projection of a 3D point <code>(X, Y, Z)</code> in view coordinates to a point <code>(x, y, z)</code> in projection space (either NDC or screen) is</p>
<pre><code class="hljs"><span class="hljs-comment"># for perspective camera</span>
<span class="hljs-attr">x</span> = fx * X / Z + px
<span class="hljs-attr">y</span> = fy * Y / Z + py
<span class="hljs-attr">z</span> = <span class="hljs-number">1</span> / Z
<span class="hljs-comment"># for orthographic camera</span>
<span class="hljs-attr">x</span> = fx * X + px
<span class="hljs-attr">y</span> = fy * Y + py
<span class="hljs-attr">z</span> = Z
</code></pre>
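<p>As a quick sanity check of the perspective equations (the intrinsics and the view-space point below are made-up values):</p>
<pre><code class="hljs css language-python"># Plain Python mirror of the perspective formulas above.
fx, fy, px, py = 1.5, 1.5, 0.0, 0.0
X, Y, Z = 0.4, -0.2, 2.0

x = fx * X / Z + px  # 1.5 * 0.4 / 2.0 = 0.3
y = fy * Y / Z + py  # 1.5 * -0.2 / 2.0 = -0.15
z = 1 / Z            # 0.5
</code></pre>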
<p>Commonly, users have access to the focal length (<code>fx_screen</code>, <code>fy_screen</code>) and the principal point (<code>px_screen</code>, <code>py_screen</code>) in screen space. In that case, to construct the camera the user needs to additionally provide <code>image_size = ((image_height, image_width),)</code>. More precisely, <code>camera = PerspectiveCameras(focal_length=((fx_screen, fy_screen),), principal_point=((px_screen, py_screen),), in_ndc=False, image_size=((image_height, image_width),))</code>. Internally, the camera parameters are converted from screen to NDC space; the exact relationship is given at the end of this section.</p>
<p>The user can define the camera parameters either in NDC or in screen space. Screen-space camera parameters are common; in that case the user needs to set <code>in_ndc</code> to <code>False</code> and also provide the <code>image_size=(height, width)</code> of the screen, i.e. the image.</p>
<p>The <code>get_ndc_camera_transform</code> provides the transform from screen to NDC space in PyTorch3D. Note that screen space assumes that the principal point is provided in a space with <code>+X right</code>, <code>+Y down</code> and origin at the top left corner of the image. To convert to NDC we need to account for the scaling of the normalized space as well as the change in <code>XY</code> direction.</p>
<p>Below are examples of equivalent <code>PerspectiveCameras</code> instantiations in NDC and screen space, respectively.</p>
<pre><code class="hljs css language-python"><span class="hljs-comment"># NDC space camera</span>
fcl_ndc = (<span class="hljs-number">1.2</span>,)
prp_ndc = ((<span class="hljs-number">0.2</span>, <span class="hljs-number">0.5</span>),)
cameras_ndc = PerspectiveCameras(focal_length=fcl_ndc, principal_point=prp_ndc)
<span class="hljs-comment"># Screen space camera</span>
image_size = ((<span class="hljs-number">128</span>, <span class="hljs-number">256</span>),) <span class="hljs-comment"># (h, w)</span>
fcl_screen = (<span class="hljs-number">76.2</span>,) <span class="hljs-comment"># fcl_ndc * (min(image_size) - 1) / 2</span>
prp_screen = ((<span class="hljs-number">114.8</span>, <span class="hljs-number">31.75</span>), ) <span class="hljs-comment"># (w - 1) / 2 - px_ndc * (min(image_size) - 1) / 2, (h - 1) / 2 - py_ndc * (min(image_size) - 1) / 2</span>
cameras_screen = PerspectiveCameras(focal_length=fcl_screen, principal_point=prp_screen, in_ndc=<span class="hljs-literal">False</span>, image_size=image_size)
</code></pre>
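<p>Since the screen-space parameters above were derived from the NDC ones, the two cameras should project points identically. A small check, assuming the snippet above has run (this verification is our addition, not part of the original text):</p>
<pre><code class="hljs css language-python">import torch

pts = torch.rand(1, 8, 3) + torch.tensor([0.0, 0.0, 2.0])  # world points in front of the camera
ndc_a = cameras_ndc.transform_points_ndc(pts)
ndc_b = cameras_screen.transform_points_ndc(pts)
print(torch.allclose(ndc_a, ndc_b, atol=1e-5))  # expected: True
</code></pre>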
<p>The relationship between screen and NDC specifications of a camera's <code>focal_length</code> and <code>principal_point</code> is given by the following equations, where <code>s = min(image_width, image_height)</code>.
The transformation of x and y coordinates between screen and NDC is exactly the same as for px and py.</p>
<pre><code class="hljs">fx_ndc = fx_screen * <span class="hljs-number">2.0</span> / (<span class="hljs-name">s</span> - <span class="hljs-number">1</span>)
fy_ndc = fy_screen * <span class="hljs-number">2.0</span> / (<span class="hljs-name">s</span> - <span class="hljs-number">1</span>)
px_ndc = - (<span class="hljs-name">px_screen</span> - (<span class="hljs-name">image_width</span> - <span class="hljs-number">1</span>) / <span class="hljs-number">2.0</span>) * <span class="hljs-number">2.0</span> / (<span class="hljs-name">s</span> - <span class="hljs-number">1</span>)
py_ndc = - (<span class="hljs-name">py_screen</span> - (<span class="hljs-name">image_height</span> - <span class="hljs-number">1</span>) / <span class="hljs-number">2.0</span>) * <span class="hljs-number">2.0</span> / (<span class="hljs-name">s</span> - <span class="hljs-number">1</span>)
</code></pre>
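<p>Plugging the example values from the earlier snippet into these equations (h, w = 128, 256, so s = 128) recovers the NDC parameters:</p>
<pre><code class="hljs css language-python">s = 128
fx_ndc = 76.2 * 2.0 / (s - 1)                        # = 1.2
px_ndc = -(114.8 - (256 - 1) / 2.0) * 2.0 / (s - 1)  # = 0.2
py_ndc = -(31.75 - (128 - 1) / 2.0) * 2.0 / (s - 1)  # = 0.5
</code></pre>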
@@ -66,4 +66,4 @@
<p>The <a href="https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/ops/cubify.py">cubify operator</a> converts a 3D occupancy grid of shape <code>BxDxHxW</code>, where <code>B</code> is the batch size, into a mesh instantiated as a <a href="https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/structures/meshes.py">Meshes</a> data structure of <code>B</code> elements. The operator replaces every occupied voxel (one whose occupancy probability is greater than a user-defined threshold) with a cuboid of 12 faces and 8 vertices. Shared vertices are merged, and internal faces are removed, resulting in a <strong>watertight</strong> mesh.</p>
<p>The operator provides three alignment modes {<em>topleft</em>, <em>corner</em>, <em>center</em>} which define the span of the mesh vertices with respect to the voxel grid. The alignment modes are described in the figure below for a 2D grid.</p>
<p><img src="https://user-images.githubusercontent.com/4369065/81032959-af697380-8e46-11ea-91a8-fae89597f988.png" alt="input"></p>
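<p>A minimal sketch of calling the operator (the occupancy grid here is random, purely for illustration):</p>
<pre><code class="hljs css language-python">import torch
from pytorch3d.ops import cubify

voxels = torch.rand(4, 32, 32, 32)  # (B, D, H, W) occupancy probabilities
meshes = cubify(voxels, thresh=0.5, align="topleft")  # a Meshes object with 4 elements
</code></pre>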
@@ -70,4 +70,4 @@
<h3><a class="anchor" aria-hidden="true" id="r2n2"></a><a href="#r2n2" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>R2N2</h3>
<p>The R2N2 dataset contains 13 categories that are a subset of the ShapeNetCore v.1 dataset. The R2N2 dataset also contains its own 24 renderings of each object, as well as voxelized models. The R2N2 dataset can be downloaded following the instructions <a href="http://3d-r2n2.stanford.edu/">here</a>.</p>
<p>The PyTorch3D <a href="https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/datasets/r2n2/r2n2.py">R2N2 data loader</a> is initialized with the paths to the ShapeNet dataset, the R2N2 dataset and the splits file for R2N2. Just like <code>ShapeNetCore</code>, it can be passed to <code>torch.utils.data.DataLoader</code> with a customized collate_fn: <code>collate_batched_R2N2</code> from the <code>pytorch3d.datasets.r2n2.utils</code> module. It returns all the data that <code>ShapeNetCore</code> returns, and in addition the R2N2 renderings (24 views for each model) along with the camera calibration matrices and a voxel representation for each model. Similar to <code>ShapeNetCore</code>, it has a customized <code>render</code> function that supports rendering specified models with the PyTorch3D differentiable renderer. In addition, it supports rendering models with the same orientations as R2N2's original renderings.</p>
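<p>A sketch of the workflow just described; the three dataset paths are placeholders that must point at local copies of the data:</p>
<pre><code class="hljs css language-python">from torch.utils.data import DataLoader
from pytorch3d.datasets import R2N2, collate_batched_R2N2

SHAPENET_PATH = "/path/to/ShapeNetCore.v1"  # placeholder
R2N2_PATH = "/path/to/r2n2"                 # placeholder
SPLITS_PATH = "/path/to/r2n2/splits.json"   # placeholder

r2n2_dataset = R2N2("train", SHAPENET_PATH, R2N2_PATH, SPLITS_PATH)
loader = DataLoader(r2n2_dataset, batch_size=8, collate_fn=collate_batched_R2N2)
</code></pre>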
30
docs/io.html
Normal file
@@ -0,0 +1,30 @@
<h1><a class="anchor" aria-hidden="true" id="file-io"></a>File IO</h1>
<p>There is a flexible interface for loading and saving point clouds and meshes from different formats.</p>
<p>The main usage is via the <code>pytorch3d.io.IO</code> object, and its methods
<code>load_mesh</code>, <code>save_mesh</code>, <code>load_point_cloud</code> and <code>save_point_cloud</code>.</p>
<p>For example, to load a mesh you might do</p>
<pre><code class="hljs">from pytorch3d.io import IO
|
||||
|
||||
device=torch.device(<span class="hljs-string">"cuda:0"</span>)
|
||||
mesh = <span class="hljs-constructor">IO()</span>.load<span class="hljs-constructor">_mesh(<span class="hljs-string">"mymesh.ply"</span>, <span class="hljs-params">device</span>=<span class="hljs-params">device</span>)</span>
|
||||
</code></pre>
<p>and to save a pointcloud you might do</p>
<pre><code class="hljs">pcl = <span class="hljs-constructor">Pointclouds(<span class="hljs-operator">...</span>)</span>
|
||||
<span class="hljs-constructor">IO()</span>.save<span class="hljs-constructor">_point_cloud(<span class="hljs-params">pcl</span>, <span class="hljs-string">"output_pointcloud.obj"</span>)</span>
|
||||
</code></pre>
<p>For meshes, this supports OBJ, PLY and OFF files.</p>
<p>For pointclouds, this supports PLY files.</p>
<p>In addition, there is experimental support for loading meshes from <a href="https://github.com/KhronosGroup/glTF/tree/master/specification/2.0">glTF 2 assets</a> stored either in a GLB container file or a glTF JSON file with embedded binary data. This must be enabled explicitly, as described in <code>pytorch3d/io/experimental_gltf_io.py</code>.</p>
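<p>A sketch of enabling the experimental glTF support; this mirrors the registration pattern described in that module, and should be treated as an assumption to verify against your installed version (the .glb path is a placeholder):</p>
<pre><code class="hljs css language-python">from pytorch3d.io import IO
from pytorch3d.io.experimental_gltf_io import MeshGlbFormat

io = IO()
io.register_meshes_format(MeshGlbFormat())
mesh = io.load_mesh("mymesh.glb", include_textures=True)
</code></pre>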
@@ -64,12 +64,12 @@
<h1><a class="anchor" aria-hidden="true" id="meshes-and-io"></a>Meshes and IO</h1>
<p>The Meshes object represents a batch of triangulated meshes, and is central to
much of the functionality of PyTorch3D. There is no insistence that each mesh in
the batch has the same number of vertices or faces. When available, it can store
other data which pertains to the mesh, for example face normals, face areas
and textures.</p>
<p>Two common file formats for storing single meshes are ".obj" and ".ply" files,
and PyTorch3D has functions for reading these.</p>
<h2><a class="anchor" aria-hidden="true" id="obj"></a><a href="#obj" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>OBJ</h2>
<p>Obj files have a standard way to store extra information about a mesh. Given an
obj file, it can be read with</p>
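<p>In sketch form, that call is the standard <code>load_obj</code> (the exact returned fields are documented in <code>pytorch3d.io</code>):</p>
<pre><code class="hljs css language-python">from pytorch3d.io import load_obj

verts, faces, aux = load_obj(filename)
</code></pre>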
@@ -105,7 +105,7 @@ entire mesh e.g.</p>
</code></pre>
<p>The <code>load_objs_as_meshes</code> function provides this procedure.</p>
<h2><a class="anchor" aria-hidden="true" id="ply"></a><a href="#ply" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>PLY</h2>
<p>Ply files are flexible in the way they store additional information. PyTorch3D
provides a function just to read the vertices and faces from a ply file.
The call</p>
<pre><code class="hljs"> verts, faces = load<span class="hljs-constructor">_ply(<span class="hljs-params">filename</span>)</span>
@@ -116,4 +116,4 @@ are not triangles will be split into triangles. A Meshes object containing a
single mesh can be created from this data using</p>
<pre><code class="hljs"> meshes = <span class="hljs-constructor">Meshes(<span class="hljs-params">verts</span>=[<span class="hljs-params">verts</span>], <span class="hljs-params">faces</span>=[<span class="hljs-params">faces</span>])</span>
</code></pre>
|
||||
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Jeremy Reizenstein</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/why_pytorch3d"><span class="arrow-prev">← </span><span class="function-name-prevnext">Why PyTorch3D</span></a><a class="docs-next button" href="/docs/datasets"><span>Data loaders</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#obj">OBJ</a></li><li><a href="#ply">PLY</a></li></ul></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2020 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
|
||||
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Jeremy Reizenstein</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/why_pytorch3d"><span class="arrow-prev">← </span><span class="function-name-prevnext">Why PyTorch3D</span></a><a class="docs-next button" href="/docs/datasets"><span>Data loaders</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#obj">OBJ</a></li><li><a href="#ply">PLY</a></li></ul></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2021 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
|
||||
@@ -1,4 +1,4 @@
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>meshes_io · PyTorch3D</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Meshes and IO"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="meshes_io · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Meshes and IO"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>meshes_io · PyTorch3D</title><meta name="viewport" content="width=device-width, initial-scale=1.0"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Meshes and IO"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="meshes_io · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Meshes and IO"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
@@ -64,12 +64,12 @@
});
</script></nav></div><div class="container mainContainer docsContainer"><div class="wrapper"><div class="post"><header class="postHeader"></header><article><div><span><h1><a class="anchor" aria-hidden="true" id="meshes-and-io"></a><a href="#meshes-and-io" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Meshes and IO</h1>
<p>The Meshes object represents a batch of triangulated meshes, and is central to
much of the functionality of pytorch3d. There is no insistence that each mesh in
much of the functionality of PyTorch3D. There is no insistence that each mesh in
the batch has the same number of vertices or faces. When available, it can store
other data which pertains to the mesh, for example face normals, face areas
and textures.</p>
<p>Two common file formats for storing single meshes are ".obj" and ".ply" files,
and pytorch3d has functions for reading these.</p>
and PyTorch3D has functions for reading these.</p>
<h2><a class="anchor" aria-hidden="true" id="obj"></a><a href="#obj" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>OBJ</h2>
<p>Obj files have a standard way to store extra information about a mesh. Given an
obj file, it can be read with</p>
@@ -105,7 +105,7 @@ entire mesh e.g.</p>
</code></pre>
<p>The <code>load_objs_as_meshes</code> function provides this procedure.</p>
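<p>Putting this together (a minimal sketch, not part of the original page; the file path and device choice are illustrative):</p>
<pre><code class="hljs">import torch
from pytorch3d.io import load_objs_as_meshes

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# "model.obj" is a hypothetical path; load_objs_as_meshes reads one or more
# obj files and returns a single Meshes object, with textures attached
# when the obj files reference them.
meshes = load_objs_as_meshes(["model.obj"], device=device)
</code></pre>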
<h2><a class="anchor" aria-hidden="true" id="ply"></a><a href="#ply" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>PLY</h2>
<p>Ply files are flexible in the way they store additional information, pytorch3d
<p>Ply files are flexible in the way they store additional information. PyTorch3D
provides a function just to read the vertices and faces from a ply file.
The call</p>
<pre><code class="hljs"> verts, faces = load<span class="hljs-constructor">_ply(<span class="hljs-params">filename</span>)</span>
@@ -116,4 +116,4 @@ are not triangles will be split into triangles. A Meshes object containing a
single mesh can be created from this data using</p>
<pre><code class="hljs"> meshes = <span class="hljs-constructor">Meshes(<span class="hljs-params">verts</span>=[<span class="hljs-params">verts</span>], <span class="hljs-params">faces</span>=[<span class="hljs-params">faces</span>])</span>
</code></pre>
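<p>Because <code>Meshes</code> accepts lists, several ply files can be combined into one batch even when their sizes differ. A short sketch (not from the original page; the file names are hypothetical):</p>
<pre><code class="hljs">from pytorch3d.io import load_ply
from pytorch3d.structures import Meshes

verts1, faces1 = load_ply("bunny.ply")
verts2, faces2 = load_ply("dragon.ply")

# The two meshes may have different numbers of vertices and faces.
meshes = Meshes(verts=[verts1, verts2], faces=[faces1, faces2])
print(meshes.verts_padded().shape)  # (2, max_V, 3), zero padded
</code></pre>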
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Jeremy Reizenstein</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/why_pytorch3d"><span class="arrow-prev">← </span><span class="function-name-prevnext">Why PyTorch3D</span></a><a class="docs-next button" href="/docs/datasets"><span>Data loaders</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#obj">OBJ</a></li><li><a href="#ply">PLY</a></li></ul></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2020 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Jeremy Reizenstein</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/why_pytorch3d"><span class="arrow-prev">← </span><span class="function-name-prevnext">Why PyTorch3D</span></a><a class="docs-next button" href="/docs/datasets"><span>Data loaders</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#obj">OBJ</a></li><li><a href="#ply">PLY</a></li></ul></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2021 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
@@ -1,4 +1,4 @@
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>renderer · PyTorch3D</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Rendering Overview"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="renderer · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Rendering Overview"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>renderer · PyTorch3D</title><meta name="viewport" content="width=device-width, initial-scale=1.0"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Rendering Overview"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="renderer · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Rendering Overview"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
@@ -109,4 +109,4 @@ total_memory = memory_forward_pass + memory_backward_pass
<p><a id="6">[6]</a> Yifan et al, 'Differentiable Surface Splatting for Point-based Geometry Processing', SIGGRAPH Asia 2019</p>
<p><a id="7">[7]</a> Loubet et al, 'Reparameterizing Discontinuous Integrands for Differentiable Rendering', SIGGRAPH Asia 2019</p>
<p><a id="8">[8]</a> Chen et al, 'Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer', NeurIPS 2019</p>
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Christoph Lassner</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/visualization"><span class="arrow-prev">← </span><span>Plotly Visualization</span></a><a class="docs-next button" href="/docs/renderer_getting_started"><span>Getting Started</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#uget-startedu"><u>Get started</u></a></li><li><a href="#utech-reportu"><u>Tech Report</u></a><ul class="toc-headings"><li><a href="#references">References</a></li></ul></li></ul></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2020 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Christoph Lassner</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/visualization"><span class="arrow-prev">← </span><span>Plotly Visualization</span></a><a class="docs-next button" href="/docs/renderer_getting_started"><span>Getting Started</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#uget-startedu"><u>Get started</u></a></li><li><a href="#utech-reportu"><u>Tech Report</u></a><ul class="toc-headings"><li><a href="#references">References</a></li></ul></li></ul></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2021 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
@@ -1,4 +1,4 @@
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>renderer_getting_started · PyTorch3D</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Getting Started With Renderer"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="renderer_getting_started · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Getting Started With Renderer"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>renderer_getting_started · PyTorch3D</title><meta name="viewport" content="width=device-width, initial-scale=1.0"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Getting Started With Renderer"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="renderer_getting_started · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Getting Started With Renderer"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
@@ -94,6 +94,14 @@ giving the barycentric coordinates in NDC units of the nearest faces at each pix
</ul>
<p><img align="center" src="assets/opengl_coordframes.png" width="300"></p>
<hr>
<h3><a class="anchor" aria-hidden="true" id="rasterizing-non-square-images"></a><a href="#rasterizing-non-square-images" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Rasterizing Non Square Images</h3>
<p>To rasterize an image where H != W, you can specify the <code>image_size</code> in the <code>RasterizationSettings</code> as a tuple of (H, W).</p>
<p>The aspect ratio needs special consideration. There are two aspect ratios to be aware of:</p>
<ul>
<li>the aspect ratio of each pixel</li>
<li>the aspect ratio of the output image</li>
</ul>
<p>In the cameras, e.g. <code>FoVPerspectiveCameras</code>, the <code>aspect_ratio</code> argument can be used to set the pixel aspect ratio. In the rasterizer, we assume square pixels but a variable image aspect ratio (i.e. rectangular images).</p>
<p>In most cases you will want to set the camera aspect ratio to 1.0 (i.e. square pixels) and only vary the <code>image_size</code> in the <code>RasterizationSettings</code> (i.e. the output image dimensions in pixels).</p>
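<p>A minimal sketch of a non square setup (not part of the original page; the sizes and camera settings are illustrative):</p>
<pre><code class="hljs">import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras,
    MeshRasterizer,
    RasterizationSettings,
)

device = torch.device("cpu")
cameras = FoVPerspectiveCameras(aspect_ratio=1.0, device=device)  # square pixels

# A 512 x 1024 (H x W) output image, i.e. a rectangular rendering.
raster_settings = RasterizationSettings(
    image_size=(512, 1024),
    blur_radius=0.0,
    faces_per_pixel=1,
)
rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)
</code></pre>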
<hr>
<h3><a class="anchor" aria-hidden="true" id="the-pulsar-backend"></a><a href="#the-pulsar-backend" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>The pulsar backend</h3>
<p>Since v0.3, <a href="https://arxiv.org/abs/2004.07484">pulsar</a> can be used as a backend for point-rendering. It has a focus on efficiency, which comes with pros and cons: it is highly optimized and all rendering stages are integrated in the CUDA kernels. This leads to significantly higher speed and better scaling behavior. We use it at Facebook Reality Labs to render and optimize scenes with millions of spheres in resolutions up to 4K. You can find a runtime comparison plot below (settings: <code>bin_size=None</code>, <code>points_per_pixel=5</code>, <code>image_size=1024</code>, <code>radius=1e-2</code>, <code>composite_params.radius=1e-4</code>; benchmarked on an RTX 2070 GPU).</p>
<p><img align="center" src="assets/pulsar_bm.png" width="300"></p>
@@ -104,9 +112,10 @@ giving the barycentric coordinates in NDC units of the nearest faces at each pix
<ol>
<li><strong>Vertex Textures</strong>: D dimensional textures for each vertex (for example an RGB color) which can be interpolated across the face. This can be represented as an <code>(N, V, D)</code> tensor. This is a fairly simple representation though and cannot model complex textures if the mesh faces are large.</li>
<li><strong>UV Textures</strong>: vertex UV coordinates and <strong>one</strong> texture map for the whole mesh. For a point on a face with given barycentric coordinates, the face color can be computed by interpolating the vertex uv coordinates and then sampling from the texture map. This representation requires two tensors (UVs: <code>(N, V, 2)</code>, Texture map: <code>(N, H, W, 3)</code>), and is limited to supporting only one texture map per mesh.</li>
<li><strong>Face Textures</strong>: In more complex cases such as ShapeNet meshes, there are multiple texture maps per mesh and some faces have texture while other do not. For these cases, a more flexible representation is a texture atlas, where each face is represented as an <code>(RxR)</code> texture map where R is the texture resolution. For a given point on the face, the texture value can be sampled from the per face texture map using the barycentric coordinates of the point. This representation requires one tensor of shape <code>(N, F, R, R, 3)</code>. This texturing method is inspired by the SoftRasterizer implementation. For more details refer to the <a href="https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/io/mtl_io.py#L123"><code>make_material_atlas</code></a> and <a href="https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/renderer/mesh/textures.py#L452"><code>sample_textures</code></a> functions.</li>
<li><strong>Face Textures</strong>: In more complex cases such as ShapeNet meshes, there are multiple texture maps per mesh and some faces have texture while others do not. For these cases, a more flexible representation is a texture atlas, where each face is represented as an <code>(RxR)</code> texture map where R is the texture resolution. For a given point on the face, the texture value can be sampled from the per face texture map using the barycentric coordinates of the point. This representation requires one tensor of shape <code>(N, F, R, R, 3)</code>. This texturing method is inspired by the SoftRasterizer implementation. For more details refer to the <a href="https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/io/mtl_io.py#L123"><code>make_material_atlas</code></a> and <a href="https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/renderer/mesh/textures.py#L452"><code>sample_textures</code></a> functions. <strong>NOTE</strong>: The <code>TexturesAtlas</code> texture sampling is differentiable with respect to the texture atlas but not with respect to the barycentric coordinates.</li>
</ol>
<p><img src="assets/texturing.jpg" width="1000"></p>
<hr>
<h3><a class="anchor" aria-hidden="true" id="a-simple-renderer"></a><a href="#a-simple-renderer" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>A simple renderer</h3>
<p>A renderer in PyTorch3D is composed of a <strong>rasterizer</strong> and a <strong>shader</strong>. Create a renderer in a few simple steps:</p>
<pre><code class="hljs"><span class="hljs-comment"># Imports</span>
@@ -129,13 +138,14 @@ raster_settings = RasterizationSettings(
<span class="hljs-attribute">faces_per_pixel</span>=1,
)

<span class="hljs-comment"># Create a phong renderer by composing a rasterizer and a shader. Here we can use a predefined</span>
<span class="hljs-comment"># Create a Phong renderer by composing a rasterizer and a shader. Here we can use a predefined</span>
<span class="hljs-comment"># PhongShader, passing in the device on which to initialize the default parameters</span>
renderer = MeshRenderer(
<span class="hljs-attribute">rasterizer</span>=MeshRasterizer(cameras=cameras, <span class="hljs-attribute">raster_settings</span>=raster_settings),
<span class="hljs-attribute">shader</span>=HardPhongShader(device=device, <span class="hljs-attribute">cameras</span>=cameras)
)
</code></pre>
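<p>Once composed, the renderer is called directly on a textured <code>Meshes</code> object. A usage sketch (not from the original page; <code>mesh</code> is assumed to carry textures and live on the same device):</p>
<pre><code class="hljs">images = renderer(mesh)  # (N, H, W, 4) tensor of RGBA images
</code></pre>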
<hr>
<h3><a class="anchor" aria-hidden="true" id="a-custom-shader"></a><a href="#a-custom-shader" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>A custom shader</h3>
<p>Shaders are the most flexible part of the PyTorch3D rendering API. We have created some examples of shaders in <code>shaders.py</code> but this is a non-exhaustive set.</p>
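<p>As a sketch of the pattern (this class is not one of the shaders shipped in <code>shaders.py</code>; it assumes the mesh carries textures), a custom shader is a <code>Module</code> whose <code>forward</code> receives the rasterizer's <code>fragments</code> together with the <code>meshes</code>:</p>
<pre><code class="hljs">import torch.nn as nn
from pytorch3d.renderer.blending import BlendParams, hard_rgb_blend

class SimpleTextureShader(nn.Module):
    """Sample the mesh textures at each pixel and hard blend, with no lighting."""

    def __init__(self, blend_params=None):
        super().__init__()
        self.blend_params = blend_params if blend_params is not None else BlendParams()

    def forward(self, fragments, meshes, **kwargs):
        texels = meshes.sample_textures(fragments)  # (N, H, W, K, C) texture colors
        return hard_rgb_blend(texels, fragments, self.blend_params)
</code></pre>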
<p>A shader can incorporate several steps:</p>
@@ -158,4 +168,4 @@ renderer = MeshRenderer(
<tr><td>SoftSilhouetteShader</td><td style="text-align:center"></td><td style="text-align:center"></td><td style="text-align:center"></td><td style="text-align:center"></td><td style="text-align:center"></td><td style="text-align:center"></td><td style="text-align:center"></td><td style="text-align:center">✔️</td></tr>
</tbody>
</table>
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Christoph Lassner</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/renderer"><span class="arrow-prev">← </span><span>Overview</span></a><a class="docs-next button" href="/docs/cameras"><span>Cameras</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2020 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Jeremy Reizenstein</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/renderer"><span class="arrow-prev">← </span><span>Overview</span></a><a class="docs-next button" href="/docs/cameras"><span>Cameras</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2021 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
@@ -1,4 +1,4 @@
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>visualization · PyTorch3D</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Overview"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="visualization · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Overview"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>visualization · PyTorch3D</title><meta name="viewport" content="width=device-width, initial-scale=1.0"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Overview"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="visualization · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Overview"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
@@ -75,4 +75,4 @@
<pre><code class="hljs"><span class="hljs-attribute">fig</span> = ...
fig.write_image(<span class="hljs-string">"image_name.png"</span>)
</code></pre>
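<p>For context (a sketch, not part of the original page; <code>meshes</code> is an assumed <code>Meshes</code> object and the trace names are arbitrary), such a figure can come from <code>plot_scene</code>:</p>
<pre><code class="hljs">from pytorch3d.vis.plotly_vis import plot_scene

fig = plot_scene({"Scene": {"mesh trace": meshes}})
fig.write_image("image_name.png")  # or fig.show() for an interactive view
</code></pre>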
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Amitav Baruah</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/cubify"><span class="arrow-prev">← </span><span>Cubify</span></a><a class="docs-next button" href="/docs/renderer"><span>Overview</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2020 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Amitav Baruah</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/cubify"><span class="arrow-prev">← </span><span>Cubify</span></a><a class="docs-next button" href="/docs/renderer"><span>Overview</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2021 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
@@ -1,4 +1,4 @@
|
||||
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>visualization · PyTorch3D</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Overview"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="visualization · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Overview"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>visualization · PyTorch3D</title><meta name="viewport" content="width=device-width, initial-scale=1.0"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Overview"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="visualization · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Overview"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
@@ -75,4 +75,4 @@
<pre><code class="hljs"><span class="hljs-attribute">fig</span> = ...
fig.write_image(<span class="hljs-string">"image_name.png"</span>)
</code></pre>
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Amitav Baruah</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/cubify"><span class="arrow-prev">← </span><span>Cubify</span></a><a class="docs-next button" href="/docs/renderer"><span>Overview</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2020 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Amitav Baruah</em></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/cubify"><span class="arrow-prev">← </span><span>Cubify</span></a><a class="docs-next button" href="/docs/renderer"><span>Overview</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2021 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
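(The context lines above show the figure-export snippet from the visualization doc. As a minimal sketch of what that placeholder expands to, assuming plotly with the kaleido image backend installed; the figure contents below are illustrative, not taken from the docs:

import plotly.graph_objects as go

# Stand-in for the `fig = ...` placeholder in the docs snippet; any
# plotly Figure works, including those returned by PyTorch3D's plot_scene.
fig = go.Figure(data=[go.Scatter3d(x=[0, 1], y=[0, 1], z=[0, 1])])

# Static export as shown in the snippet; requires the kaleido package.
fig.write_image("image_name.png")
)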
@@ -1,4 +1,4 @@
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>why_pytorch3d · PyTorch3D</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Why PyTorch3D"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="why_pytorch3d · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Why PyTorch3D"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>why_pytorch3d · PyTorch3D</title><meta name="viewport" content="width=device-width, initial-scale=1.0"/><meta name="generator" content="Docusaurus"/><meta name="description" content="# Why PyTorch3D"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="why_pytorch3d · PyTorch3D"/><meta property="og:type" content="website"/><meta property="og:url" content="https://pytorch3d.org/"/><meta property="og:description" content="# Why PyTorch3D"/><meta property="og:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><meta name="twitter:card" content="summary"/><meta name="twitter:image" content="https://pytorch3d.org/img/pytorch3dlogoicon.svg"/><link rel="shortcut icon" href="/img/pytorch3dfavicon.png"/><link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"/><script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
@@ -65,4 +65,4 @@
</script></nav></div><div class="container mainContainer docsContainer"><div class="wrapper"><div class="post"><header class="postHeader"></header><article><div><span><h1><a class="anchor" aria-hidden="true" id="why-pytorch3d"></a><a href="#why-pytorch3d" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg></a>Why PyTorch3D</h1>
<p>Our goal with PyTorch3D is to help accelerate research at the intersection of deep learning and 3D. 3D data is more complex than 2D images and while working on projects such as <a href="https://github.com/facebookresearch/meshrcnn">Mesh R-CNN</a> and <a href="https://github.com/facebookresearch/c3dpo_nrsfm">C3DPO</a>, we encountered several challenges including 3D data representation, batching, and speed. We have developed many useful operators and abstractions for working on 3D deep learning and want to share this with the community to drive novel research in this area.</p>
<p>In PyTorch3D we have included efficient 3D operators, heterogeneous batching capabilities, and a modular differentiable rendering API, to equip researchers in this field with a much needed toolkit to implement cutting-edge research with complex 3D inputs.</p>
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Patrick Labatut</em></div><div class="docs-prevnext"><a class="docs-next button" href="/docs/meshes_io"><span>Loading from file</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2020 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>
</span></div></article></div><div class="docLastUpdate"><em>Last updated by Patrick Labatut</em></div><div class="docs-prevnext"><a class="docs-next button" href="/docs/meshes_io"><span>Loading from file</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer" id="footer"><section class="sitemap"><div class="footerSection"><div class="social"><a class="github-button" href="https://github.com/facebookresearch/pytorch3d" data-count-href="https://github.com/facebookresearch/pytorch3d/stargazers" data-show-count="true" data-count-aria-label="# stargazers on GitHub" aria-label="Star PyTorch3D on GitHub">pytorch3d</a></div></div></section><a href="https://opensource.facebook.com/" target="_blank" rel="noreferrer noopener" class="fbOpenSource"><img src="/img/oss_logo.png" alt="Facebook Open Source" width="170" height="45"/></a><section class="copyright">Copyright © 2021 Facebook Inc<br/>Legal:<a href="https://opensource.facebook.com/legal/privacy/" target="_blank" rel="noreferrer noopener">Privacy</a><a href="https://opensource.facebook.com/legal/terms/" target="_blank" rel="noreferrer noopener">Terms</a></section></footer></div></body></html>