71 Commits

Author SHA1 Message Date
bottler
9c586b1351 Run tests in github action not circleci (#1896)
Summary: Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1896

Differential Revision: D65272512

Pulled By: bottler
2024-10-31 08:41:20 -07:00
Richard Barnes
e13848265d at::optional -> std::optional (#1170)
Summary: Pull Request resolved: https://github.com/pytorch/ao/pull/1170

Reviewed By: gineshidalgo99

Differential Revision: D64938040

fbshipit-source-id: 57f98b90676ad0164a6975ea50e4414fd85ae6c4
2024-10-25 06:37:57 -07:00
generatedunixname89002005307016
58566963d6 Add type error suppressions for upcoming upgrade
Reviewed By: MaggieMoss

Differential Revision: D64502797

fbshipit-source-id: cee9a54dfa8a005d5912b895d0bd094f352c5c6f
2024-10-16 19:22:01 -07:00
Suresh Babu Kolla
e17ed5cd50 Hipify Pulsar for PyTorch3D
Summary:
- Hipified Pytorch Pulsar
   - Created separate target for Pulsar tests and enabled RE testing
   - Pytorch3D full test suite requires additional work like fixing EGL
     dependencies on AMD

Reviewed By: danzimm

Differential Revision: D61339912

fbshipit-source-id: 0d10bc966e4de4a959f3834a386bad24e449dc1f
2024-10-09 14:38:42 -07:00
Richard Barnes
8ed0c7a002 c10::optional -> std::optional
Summary: `c10::optional` is an alias for `std::optional`. Let's remove the alias and use the real thing.

Reviewed By: meyering

Differential Revision: D63402341

fbshipit-source-id: 241383e7ca4b2f3f1f9cac3af083056123dfd02b
2024-10-03 14:38:37 -07:00
Richard Barnes
2da913c7e6 c10::optional -> std::optional
Summary: `c10::optional` is an alias for `std::optional`. Let's remove the alias and use the real thing.

Reviewed By: palmje

Differential Revision: D63409387

fbshipit-source-id: fb6db59a14db9e897e2e6b6ad378f33bf2af86e8
2024-10-02 11:09:29 -07:00
generatedunixname89002005307016
fca83e6369 Convert .pyre_configuration.local to fast by default architecture] [batch:23/263] [shard:3/N] [A]
Reviewed By: connernilsen

Differential Revision: D63415925

fbshipit-source-id: c3e28405c70f9edcf8c21457ac4faf7315b07322
2024-09-25 17:34:03 -07:00
Jeremy Reizenstein
75ebeeaea0 update version to 0.7.8
Summary: as title

Reviewed By: das-intensity

Differential Revision: D62588556

fbshipit-source-id: 55bae19dd1df796e83179cd29d805fcd871b6d23
2024-09-13 02:31:49 -07:00
Jeremy Reizenstein
ab793177c6 remove pytorch2.0 builds
Summary: these are failing in ci

Reviewed By: das-intensity

Differential Revision: D62594666

fbshipit-source-id: 5e3a7441be2978803dc2d3e361365e0fffa7ad3b
2024-09-13 02:07:25 -07:00
Jeremy Reizenstein
9acdd67b83 fix obj material indexing bug #1368
Summary:
Make the negative index actually not an error

fixes https://github.com/facebookresearch/pytorch3d/issues/1368

Reviewed By: das-intensity

Differential Revision: D62177991

fbshipit-source-id: e5ed433bde1f54251c4d4b6db073c029cbe87343
2024-09-13 02:00:49 -07:00
Nicholas Dahm
3f428d9981 pytorch 2.4.0 + 2.4.1
Summary:
Apparently pytorch 2.4 is now supported as per [this closed issue](https://github.com/facebookresearch/pytorch3d/issues/1863).

Added the `2.4.0` & `2.4.1` versions to `regenerate.py`, then ran it as described in `README_fb.md`, which generated the `config.yml` changes.

Reviewed By: bottler

Differential Revision: D62517831

fbshipit-source-id: 002e276dfe2fa078136ff2f6c747d937abbadd1a
2024-09-11 15:09:43 -07:00
Josh Fromm
05cbea115a Hipify Pytorch3D (#1851)
Summary:
X-link: https://github.com/pytorch/pytorch/pull/133343

X-link: https://github.com/fairinternal/pytorch3d/pull/45

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1851

Enables pytorch3d to build on AMD. An important part of enabling this was not compiling the Pulsar backend when the target is AMD. There are simply too many kernel incompatibilities to make it work (I tried haha). Fortunately, it doesn't seem like most modern applications of pytorch3d rely on Pulsar. We should be able to unlock most of pytorch3d's goodness on AMD without it.

Reviewed By: bottler, houseroad

Differential Revision: D61171993

fbshipit-source-id: fd4aee378a3568b22676c5bf2b727c135ff710af
2024-08-15 16:18:22 -07:00
generatedunixname89002005307016
38afdcfc68 upgrade pyre version in fbcode/vision - batch 2
Reviewed By: bottler

Differential Revision: D60992234

fbshipit-source-id: 899db6ed590ef966ff651c11027819e59b8401a3
2024-08-09 02:07:45 -07:00
Christine Sun
1e0b1d9c72 Remove Python versions from Install.md
Summary: To avoid the installation instructions for PyTorch3D becoming out-of-date, instead of specifying certain Python versions, update to just `Python`. The reader will understand it has to be a compatible Python version.

Reviewed By: bottler

Differential Revision: D60919848

fbshipit-source-id: 5e974970a0db3d3d32fae44e5dd30cbc1ce237a9
2024-08-07 13:46:31 -07:00
Rebecca Chen (Python)
44702fdb4b Add "max" point reduction for chamfer distance
Summary:
* Adds a "max" option for the point_reduction input to the
  chamfer_distance function.
* When combining the x and y directions, maxes the losses instead
  of summing them when point_reduction="max".
* Moves batch reduction to happen after the directions are
  combined.
* Adds test_chamfer_point_reduction_max and
  test_single_directional_chamfer_point_reduction_max tests.

Fixes  https://github.com/facebookresearch/pytorch3d/issues/1838
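For illustration only, a minimal sketch of how the new option could be exercised, assuming the public `chamfer_distance` API and the "max" value described above; the random inputs are made up:

```python
import torch
from pytorch3d.loss import chamfer_distance

x = torch.rand(2, 128, 3)  # batch of 2 point clouds, 128 points each
y = torch.rand(2, 256, 3)

# Max over points in each direction, maxed across directions, then batch-reduced.
loss_max, _ = chamfer_distance(x, y, point_reduction="max")
# The pre-existing behaviour, for comparison.
loss_mean, _ = chamfer_distance(x, y, point_reduction="mean")
```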

Reviewed By: bottler

Differential Revision: D60614661

fbshipit-source-id: 7879816acfda03e945bada951b931d2c522756eb
2024-08-02 10:46:07 -07:00
Jeremy Reizenstein
7edaee71a9 allow matrix_to_quaternion onnx export
Summary: Attempt to allow torch.onnx.dynamo_export(matrix_to_quaternion) to work.
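A hedged sketch of the kind of export this aims to enable; `torch.onnx.dynamo_export` is the exporter named above, while the sample input and output path are made up, and whether the export succeeds depends on the PyTorch version:

```python
import torch
from pytorch3d.transforms import matrix_to_quaternion

rots = torch.eye(3).expand(4, 3, 3)  # batch of identity rotation matrices
onnx_program = torch.onnx.dynamo_export(matrix_to_quaternion, rots)
onnx_program.save("matrix_to_quaternion.onnx")  # hypothetical output path
```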

Differential Revision: D59812279

fbshipit-source-id: 4497e5b543bec9d5c2bdccfb779d154750a075ad
2024-07-16 11:30:20 -07:00
Roman Shapovalov
d0d0e02007 Fix: setting FrameData.crop_bbox_xywh for backwards compatibility
Summary: This diff fixes a backwards-compatibility issue in PyTorch3D's dataset API. The code ensures that the `crop_bbox_xywh` attribute is set when the box_crop flag is on. This is an implementation detail that people should not really rely on, but some people depend on this behaviour.

Reviewed By: bottler

Differential Revision: D59777449

fbshipit-source-id: b875e9eb909038b8629ccdade87661bb2c39d529
2024-07-16 02:21:13 -07:00
Jeremy Reizenstein
4df110b0a9 remove fvcore dependency
Summary: This is not actually needed, and it causes conda-forge confusion around python_abi, which forces users to pass `-c conda-forge` when they install pytorch3d.

Reviewed By: patricklabatut

Differential Revision: D59587930

fbshipit-source-id: 961ae13a62e1b2b2ce6d8781db38bd97eca69e65
2024-07-11 04:35:38 -07:00
Huy Do
51fd114d8b Forward fix internal pyre failure from D58983461
Summary:
X-link: https://github.com/pytorch/pytorch/pull/129525

Somehow, using underscore alias of some builtin types breaks pyre

Reviewed By: malfet, clee2000

Differential Revision: D59029768

fbshipit-source-id: cfa2171b66475727b9545355e57a8297c1dc0bc6
2024-06-27 07:35:18 -07:00
Jeremy Reizenstein
89653419d0 version 0.7.7
Summary: New version

Reviewed By: MichaelRamamonjisoa

Differential Revision: D58668979

fbshipit-source-id: 195eaf83e4da51a106ef72e38dbb98c51c51724c
2024-06-25 06:59:24 -07:00
Jeremy Reizenstein
7980854d44 require pytorch 2.0+
Summary: Problems with timeouts on old builds.

Reviewed By: MichaelRamamonjisoa

Differential Revision: D58819435

fbshipit-source-id: e1976534a102ad3841f3b297c772e916aeea12cb
2024-06-21 08:15:17 -07:00
Jeremy Reizenstein
51d7c06ddd MKL version fix in CI (#1820)
Summary:
Fix for "undefined symbol: iJIT_NotifyEvent" build issue,

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1820

Reviewed By: MichaelRamamonjisoa

Differential Revision: D58685326

fbshipit-source-id: 48b54367c00851cc6fbb111ca98d69a2ace8361b
2024-06-21 08:15:17 -07:00
Sergii Dymchenko
00c36ec01c Update deprecated PyTorch functions in fbcode/vision
Reviewed By: bottler

Differential Revision: D58762015

fbshipit-source-id: a0d05fe63a88d33e3f7783b5a7b2a476dd3a7449
2024-06-20 14:06:28 -07:00
vedrenne
b0462d8079 Allow indexing for classes inheriting Transform3d (#1801)
Summary:
Currently, it is not possible to access a sub-transform with an indexer for all of the 3D transforms that inherit from the `Transform3d` class.
For instance:

```python
from pytorch3d import transforms

N = 10
r = transforms.random_rotations(N)
T = transforms.Transform3d().rotate(R=r)
R = transforms.Rotate(r)

x = T[0]  # ok
x = R[0]  # TypeError: __init__() got an unexpected keyword argument 'matrix'
```

This is because all these classes (namely `Rotate`, `Translate`, `Scale`, `RotateAxisAngle`) inherit the `__getitem__()` method from `Transform3d` which has the [following code on line 201](https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/transforms/transform3d.py#L201):

```python
return self.__class__(matrix=self.get_matrix()[index])
```

The four classes inheriting `Transform3d` are not initialized through a matrix argument, hence they error.
I propose to modify the `__getitem__()` method of the `Transform3d` class to fix this behavior. The least invasive way to do it I can think of consists of creating an empty instance of the current class, then setting the `_matrix` attribute manually. Thus, instead of
```python
return self.__class__(matrix=self.get_matrix()[index])
```
I propose to do:
```python
instance = self.__class__.__new__(self.__class__)
instance._matrix = self.get_matrix()[index]
return instance
```

As far as I can tell, this modification changes nothing for the user, except for adding the ability to index all 3D transforms.
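A hedged usage sketch of what the change is meant to allow (illustrative, not part of the diff):

```python
from pytorch3d import transforms

r = transforms.random_rotations(10)
R = transforms.Rotate(r)

x = R[0]    # previously raised TypeError; with the proposed __getitem__ it returns a sub-transform
y = R[2:5]  # slicing goes through the same path via get_matrix()
assert isinstance(x, transforms.Rotate)  # the sub-transform keeps the subclass
```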

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1801

Reviewed By: MichaelRamamonjisoa

Differential Revision: D58410389

Pulled By: bottler

fbshipit-source-id: f371e4c63d2ae4c927a7ad48c2de8862761078de
2024-06-17 07:48:18 -07:00
Jeremy Reizenstein
b66d17a324 Undo c10=>std optional rename
Summary: Undoes the pytorch3d changes in D57294278 because they break builds for PyTorch<2.1.

Reviewed By: MichaelRamamonjisoa

Differential Revision: D57379779

fbshipit-source-id: 47a12511abcec4c3f4e2f62eff5ba99deb2fab4c
2024-06-17 07:09:30 -07:00
Kyle Vedder
717493cb79 Fixed last dimension size check so that it doesn't trivially pass. (#1815)
Summary:
Currently, it checks that dimension 2 of `p2` is the same size as dimension 2 of `p2` itself, instead of comparing it against `p1`.

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1815

Reviewed By: MichaelRamamonjisoa

Differential Revision: D58586966

Pulled By: bottler

fbshipit-source-id: d4f723fa264f90fe368c10825c1acdfdc4c406dc
2024-06-17 06:00:13 -07:00
Jeremy Reizenstein
302da69461 builds for PyTorch 2.2.1 2.2.2 2.3.0 2.3.1
Summary: Build for new pytorch versions

Reviewed By: MichaelRamamonjisoa

Differential Revision: D58668956

fbshipit-source-id: 7fdfb377b370448d6147daded6a21b8db87586fb
2024-06-17 05:57:59 -07:00
Roman Shapovalov
4ae25bfce7 Moving ray bundle to float dtype
Summary: We can now move the ray bundle to float dtype (e.g. from fp16-like types).

Reviewed By: bottler

Differential Revision: D57493109

fbshipit-source-id: 4e18a427e968b646fe5feafbff653811cd007981
2024-05-30 10:06:38 -07:00
Richard Barnes
bd52f4a408 c10::optional -> std::optional in tensorboard/adhoc/Adhoc.h +9
Summary: `c10::optional` was switched to be `std::optional` after PyTorch moved to C++17. Let's eliminate `c10::optional`, if we can.

Reviewed By: albanD

Differential Revision: D57294278

fbshipit-source-id: f6f26133c43f8d92a4588f59df7d689e7909a0cd
2024-05-13 16:40:34 -07:00
generatedunixname89002005307016
17117106e4 upgrade pyre version in fbcode/vision - batch 2
Differential Revision: D57183103

fbshipit-source-id: 7e2f42ddc6a1fa02abc27a451987d67a00264cbb
2024-05-10 01:18:43 -07:00
Richard Barnes
aec76bb4c8 Remove unused-but-set variables in vision/fair/pytorch3d/pytorch3d/csrc/pulsar/include/renderer.render.device.h +1
Summary:
This diff removes a variable that was set, but which was not used.

LLVM-15 has a warning `-Wunused-but-set-variable` which we treat as an error because it's so often diagnostic of a code issue. Unused but set variables often indicate a programming mistake, but can also just be unnecessary cruft that harms readability and performance.

Removing this variable will not change how your code works, but the unused variable may indicate your code isn't working the way you thought it was. I've gone through each of these by hand, but mistakes may have slipped through. If you feel the diff needs changes before landing, **please commandeer** and make appropriate changes: there are hundreds of these and responding to them individually is challenging.

For questions/comments, contact r-barnes.

 - If you approve of this diff, please use the "Accept & Ship" button :-)

Reviewed By: bottler

Differential Revision: D56886956

fbshipit-source-id: 0c515ed98b812b1c106a59e19ec90751ce32e8c0
2024-05-02 13:58:05 -07:00
Andres Suarez
47d5dc8824 Apply clang-format 18
Summary: Previously this code conformed to clang-format 12.

Reviewed By: igorsugak

Differential Revision: D56065247

fbshipit-source-id: f5a985dd8f8b84f2f9e1818b3719b43c5a1b05b3
2024-04-14 11:28:32 -07:00
generatedunixname89002005307016
fe0b1bae49 upgrade pyre version in fbcode/vision - batch 2
Differential Revision: D55650177

fbshipit-source-id: d5faa4d805bb40fe3dea70b0601e7a1382b09f3a
2024-04-02 18:11:50 -07:00
Ruishen Lyu
ccf22911d4 Optimize list_to_packed to avoid for loop (#1737)
Summary:
For larger N and Mi values (e.g. N=154, Mi=238) I noticed that list_to_packed() had become a bottleneck for my application. By removing the for loop and running on GPU, I see a 10-20x speedup.
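For context, a hedged sketch of loop-free packing in the spirit of this change; it is illustrative only, and the real `list_to_packed` in `pytorch3d.structures.utils` may differ in details:

```python
import torch

def list_to_packed_sketch(xs):
    # xs: list of N tensors, each of shape (Mi, K), all on the same device
    num_items = torch.tensor([x.shape[0] for x in xs], device=xs[0].device)
    cum = torch.cumsum(num_items, dim=0)
    first_idxs = cum - num_items                  # start offset of each element, no Python loop
    packed = torch.cat(xs, dim=0)                 # (sum(Mi), K)
    packed_to_list_idx = torch.repeat_interleave(
        torch.arange(len(xs), device=packed.device), num_items
    )
    return packed, num_items, first_idxs, packed_to_list_idx
```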

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1737

Reviewed By: MichaelRamamonjisoa

Differential Revision: D54187993

Pulled By: bottler

fbshipit-source-id: 16399a24cb63b48c30460c7d960abef603b115d0
2024-04-02 07:50:25 -07:00
Ashim Dahal
128be02fc0 feat: adjusted sample_nums (#1768)
Summary:
Adjusted sample_nums to match the number of columns in the image grid. It originally produced an image grid with 5 axes but only 3 images; after this fix, the block works as intended.

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1768

Reviewed By: MichaelRamamonjisoa

Differential Revision: D55632872

Pulled By: bottler

fbshipit-source-id: 44d633a8068076889e49d49b8a7910dba0db37a7
2024-04-02 06:02:48 -07:00
Roeia Kishk
31e3488a51 Changed tutorials' pip searching
Summary:
### Generalise tutorials' pip searching:
## Required Information:
This diff contains changes to several PyTorch3D tutorials.

**Purpose of this diff:**
Replace the current installation code with a more streamlined approach that tries to install the wheel first and falls back to installing from source if the wheel is not found.

**Why this diff is required:**
This diff makes it easier to cope with new PyTorch releases and reduces the need for manual intervention, as the current process involves checking the version of PyTorch in Colab and building a new wheel if it doesn't match the expected version, which generates additional work each time there is a new PyTorch version in Colab.

**Changes:**
Before:
```
    if torch.__version__.startswith("2.1.") and sys.platform.startswith("linux"):
        # We try to install PyTorch3D via a released wheel.
        pyt_version_str=torch.__version__.split("+")[0].replace(".", "")
        version_str="".join([
            f"py3{sys.version_info.minor}_cu",
            torch.version.cuda.replace(".",""),
            f"_pyt{pyt_version_str}"
        ])
        !pip install fvcore iopath
        !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
    else:
        # We try to install PyTorch3D from source.
        !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
```
After:
```
    pyt_version_str=torch.__version__.split("+")[0].replace(".", "")
    version_str="".join([
        f"py3{sys.version_info.minor}_cu",
        torch.version.cuda.replace(".",""),
        f"_pyt{pyt_version_str}"
    ])
    !pip install fvcore iopath
    if sys.platform.startswith("linux"):
      # We try to install PyTorch3D via a released wheel.
      !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
      pip_list = !pip freeze
      need_pytorch3d = not any(i.startswith("pytorch3d==") for  i in pip_list)

    if need_pytorch3d:
        # We try to install PyTorch3D from source.
        !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
```

Reviewed By: bottler

Differential Revision: D55431832

fbshipit-source-id: a8de9162470698320241ae8401427dcb1ce17c37
2024-03-28 11:24:43 -07:00
generatedunixname89002005307016
b215776f2d upgrade pyre version in fbcode/vision - batch 2
Differential Revision: D55395614

fbshipit-source-id: 71677892b5d6f219f6df25b4efb51fb0f6b1441b
2024-03-26 22:02:22 -07:00
Cijo Jose
38cf0dc1c5 TexturesUV multiple maps
Summary: Implements TexturesUV with multiple map ids.

Reviewed By: bottler

Differential Revision: D53944063

fbshipit-source-id: 06c25eb6d69f72db0484f16566dd2ca32a560b82
2024-03-12 06:59:31 -07:00
Jaap Suter
7566530669 CUDA marching_cubes fix
Summary:
Fix an inclusive vs exclusive scan mix-up that was accidentally introduced when removing the Thrust dependency (`Thrust::exclusive_scan`) and reimplementing it using `at::cumsum` (which does an inclusive scan).

This fixes two Github reported issues:

 * https://github.com/facebookresearch/pytorch3d/issues/1731
 * https://github.com/facebookresearch/pytorch3d/issues/1751
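For context, a small illustration (in PyTorch rather than ATen) of why the two scans differ; the numbers are made up:

```python
import torch

counts = torch.tensor([3, 0, 2, 5])
inclusive = torch.cumsum(counts, dim=0)  # tensor([ 3,  3,  5, 10])  <- what at::cumsum gives
exclusive = inclusive - counts           # tensor([0, 3, 3, 5])      <- the offsets an exclusive_scan would give
```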

Reviewed By: bottler

Differential Revision: D54605545

fbshipit-source-id: da9e92f3f8a9a35f7b7191428d0b9a9ca03e0d4d
2024-03-07 15:38:24 -08:00
Conner Nilsen
a27755db41 Pyre Configurationless migration for] [batch:85/112] [shard:6/N]
Reviewed By: inseokhwang

Differential Revision: D54438157

fbshipit-source-id: a6acfe146ed29fff82123b5e458906d4b4cee6a2
2024-03-04 18:30:37 -08:00
Amethyst Reese
3da7703c5a apply Black 2024 style in fbcode (4/16)
Summary:
Formats the covered files with pyfmt.

paintitblack

Reviewed By: aleivag

Differential Revision: D54447727

fbshipit-source-id: 8844b1caa08de94d04ac4df3c768dbf8c865fd2f
2024-03-02 17:31:19 -08:00
Jeremy Reizenstein
f34104cf6e version 0.7.6
Summary: New version

Reviewed By: cijose

Differential Revision: D53852987

fbshipit-source-id: 962ab9f61153883df9da0601356bd6b108dc5df7
2024-02-19 03:28:54 -08:00
Jeremy Reizenstein
f247c86dc0 Update tutorials for 0.7.6
Summary:
version number changed with
`sed -i "s/2.1\./2.2./" *b`

Reviewed By: cijose

Differential Revision: D53852986

fbshipit-source-id: 1662c8e6d671321887a3263bc3880d5c33d1f866
2024-02-19 03:28:54 -08:00
Cijo Jose
ae9d8787ce Support color in cubify
Summary: The diff supports colors in cubify for align="center".

Reviewed By: bottler

Differential Revision: D53777011

fbshipit-source-id: ccb2bd1e3d89be3d1ac943eff08f40e50b0540d9
2024-02-16 08:19:12 -08:00
Jeremy Reizenstein
8772fe0de8 Make OpenGL optional in tests
Summary: Add an option to run tests without the OpenGL Renderer.

Reviewed By: patricklabatut

Differential Revision: D53573400

fbshipit-source-id: 54a14e7b2f156d24e0c561fdb279f4a9af01b793
2024-02-13 07:43:42 -08:00
Ada Martin
c292c71c1a c++ marching cubes fix
Summary:
Fixes https://github.com/facebookresearch/pytorch3d/issues/1641. The bug was caused by the mistaken downcasting of an int64_t into int, causing issues only on inputs large enough to have hashes that escaped the bounds of an int32.

Also added a test case for this issue.

Reviewed By: bottler

Differential Revision: D53505370

fbshipit-source-id: 0fdd0efc6d259cc3b0263e7ff3a4ab2c648ec521
2024-02-08 11:13:15 -08:00
Jeremy Reizenstein
d0d9cae9cd builds for PyTorch 2.1.1 2.1.2 2.2.0
Summary: Build for new pytorch versions

Reviewed By: shapovalov

Differential Revision: D53266104

fbshipit-source-id: f7aaacaf39cab3839b24f45361c36f087d0ea7c9
2024-02-07 11:56:52 -08:00
generatedunixname89002005287564
1f92c4e9d2 vision/fair
Reviewed By: zsol

Differential Revision: D53258682

fbshipit-source-id: 3f006b5f31a2b1ffdc6323d3a3b08ac46c3162ce
2024-01-31 07:43:49 -08:00
generatedunixname89002005307016
9b981f2c7e suppress errors in vision/fair/pytorch3d
Differential Revision: D53152021

fbshipit-source-id: 78be99b00abe4d992db844ff5877a89d42d468af
2024-01-26 19:10:37 -08:00
generatedunixname89002005307016
85eccbbf77 suppress errors in vision/fair/pytorch3d
Differential Revision: D53111480

fbshipit-source-id: 0f506bf29cf908e40b058ae72f51e828cd597825
2024-01-25 21:13:30 -08:00
generatedunixname89002005307016
b80ab0caf0 upgrade pyre version in fbcode/vision - batch 1
Differential Revision: D53059851

fbshipit-source-id: f5d0951186c858f90ddf550323a163e4b6d42b68
2024-01-24 23:56:06 -08:00
Dimitris Prountzos
1e817914b3 Fix compiler warning in knn.cu
Summary: This change updates the type of p2_idx from size_t to int64_t to address compiler warnings related to signed/unsigned comparison.

Reviewed By: bottler

Differential Revision: D52879393

fbshipit-source-id: de5484d78a907fccdaae3ce036b5e4a1a0a4de70
2024-01-18 12:27:16 -08:00
Ido Zachevsky
799c1cd21b Allow get_rgbd_point_cloud to take any #channels
Summary: Fixed `get_rgbd_point_cloud` to take any number of image input channels.

Reviewed By: bottler

Differential Revision: D52796276

fbshipit-source-id: 3ddc0d1e337a6cc53fc86c40a6ddb136f036f9bc
2024-01-16 03:38:26 -08:00
Abdelrahman Selim
292acc71a3 Update so3 operations for numerical stability
Summary: Replace implementations of `so3_exp_map` and `so3_log_map` in so3.py with existing more-stable implementations.
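For illustration, a hedged round-trip sketch of the kind of case that stability work like this targets; the inputs are made up:

```python
import torch
from pytorch3d.transforms import so3_exp_map, so3_log_map

log_rot = torch.randn(4, 3) * 1e-4   # tiny rotation angles, a numerically delicate regime
R = so3_exp_map(log_rot)             # (4, 3, 3) rotation matrices
recovered = so3_log_map(R)           # should closely match log_rot
print((recovered - log_rot).abs().max())
```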

Reviewed By: bottler

Differential Revision: D52513319

fbshipit-source-id: fbfc039643fef284d8baa11bab61651964077afe
2024-01-04 02:26:56 -08:00
Jeremy Reizenstein
3621a36494 mac build fix
Summary: Fix for https://github.com/facebookresearch/pytorch3d/issues/1708

Reviewed By: patricklabatut

Differential Revision: D52480756

fbshipit-source-id: 530c0f9413970fba042eec354e28318c96e35f42
2024-01-03 07:46:54 -08:00
Abdelrahman Selim
3087ab7f62 Standardize matrix_to_quaternion output
Summary:
An OSS user has pointed out in https://github.com/facebookresearch/pytorch3d/issues/1703 that the output of matrix_to_quaternion (in that file) can be non-standardized.

This diff solves the issue by adding a standardization step at the end of the function.
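A minimal sketch of what "standardized" means here, using functions that already exist in `pytorch3d.transforms`; illustrative only:

```python
import torch
from pytorch3d.transforms import matrix_to_quaternion, random_rotations, standardize_quaternion

q = matrix_to_quaternion(random_rotations(8))  # (8, 4), real part first
q_std = standardize_quaternion(q)              # flips sign so the real part is non-negative
assert torch.all(q_std[..., 0] >= 0)           # with this diff, q itself already satisfies this
```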

Reviewed By: bottler

Differential Revision: D52368721

fbshipit-source-id: c8d0426307fcdb7fd165e032572382d5ae360cde
2023-12-21 13:43:29 -08:00
Tony Tan
e46ab49a34 Submeshing TexturesAtlas for PyTorch3D 3D Rendering
Summary: Implement submeshing for TexturesAtlas and add associated test

Reviewed By: bottler

Differential Revision: D52334053

fbshipit-source-id: d54080e9af1f0c01551702736e858e3bd439ac58
2023-12-21 11:08:01 -08:00
Hassan Lotfi
8a27590c5f Submeshing TexturesUV
Summary: Implement `submeshes` for TexturesUV. Fix what Meshes.submeshes passes to the texture's submeshes function to make this possible.

Reviewed By: bottler

Differential Revision: D52192060

fbshipit-source-id: 526734962e3376aaf75654200164cdcebfff6997
2023-12-19 06:48:06 -08:00
Eric Young
06cdc313a7 PyTorch3D - Avoid flip in TexturesAtlas
Summary: Performance improvement: Use torch.lerp to map uv coordinates to the range needed for grid_sample (i.e. map [0, 1] to [-1, 1] and invert the y-axis)
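A hedged sketch of the coordinate mapping described above (not the actual kernel code):

```python
import torch

uv = torch.rand(16, 2)             # (u, v) in [0, 1]
start = torch.tensor([-1.0, 1.0])  # lerp start for (u, v)
end = torch.tensor([1.0, -1.0])    # lerp end for (u, v)
grid = torch.lerp(start, end, uv)  # u -> 2u - 1, v -> 1 - 2v, ready for grid_sample
```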

Reviewed By: bottler

Differential Revision: D51961728

fbshipit-source-id: db19a5e3f482e9af7b96b20f88a1e5d0076dac43
2023-12-11 12:49:17 -08:00
Roman Shapovalov
94da8841af Align_corners switch in Volumes
Summary:
Porting this commit by davnov134: 93a3a62800 (diff-a8e107ebe039de52ca112ac6ddfba6ebccd53b4f53030b986e13f019fe57a378)

Capability to interpret world/local coordinates with various align_corners semantics.

Reviewed By: bottler

Differential Revision: D51855420

fbshipit-source-id: 834cd220c25d7f0143d8a55ba880da5977099dd6
2023-12-07 03:07:41 -08:00
generatedunixname89002005307016
fbc6725f03 upgrade pyre version in fbcode/vision - batch 2
Differential Revision: D51902460

fbshipit-source-id: 3ffc5d7d2da5c5d4e971ee8275bd999c709e0b12
2023-12-06 20:53:53 -08:00
Jeremy Reizenstein
6b8766080d Use cuda's make_float3 in pulsar
Summary: Fixes github.com/facebookresearch/pytorch3d/issues/1680

Reviewed By: MichaelRamamonjisoa

Differential Revision: D51587889

fbshipit-source-id: e68ae32d7041fb9ea3e981cf2bde47f947a41ca2
2023-12-05 03:15:02 -08:00
sewon.jeon
c373a84400 Use updated naming to remove warning (#1687)
Summary:
diag_suppress is deprecated in CUDA.

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1687

Reviewed By: MichaelRamamonjisoa

Differential Revision: D51495875

Pulled By: bottler

fbshipit-source-id: 6543a15e666238365719117bfcf5f7dac532aec1
2023-12-05 03:15:02 -08:00
sewon.jeon
7606854ff7 Fix windows build (#1689)
Summary:
Change the data type used in the code to ensure cross-platform compatibility: long -> int64_t

<img width="628" alt="image" src="https://github.com/facebookresearch/pytorch3d/assets/6214316/40041f7f-3c09-4571-b9ff-676c625802e9">

Tested under
Win 11 and Ubuntu 22.04
with
CUDA 12.1.1 torch 2.1.1

Related issues & PR

https://github.com/facebookresearch/pytorch3d/pull/9

https://github.com/facebookresearch/pytorch3d/issues/1679

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1689

Reviewed By: MichaelRamamonjisoa

Differential Revision: D51521562

Pulled By: bottler

fbshipit-source-id: d8ea81e223c642e0e9fb283f5d7efc9d6ac00d93
2023-12-05 03:14:06 -08:00
Jeremy Reizenstein
83bacda8fb lint
Summary: Fix recent flake complaints

Reviewed By: MichaelRamamonjisoa

Differential Revision: D51811912

fbshipit-source-id: 65183f5bc7058da910e4d5a63b2250ce8637f1cc
2023-12-04 13:43:34 -08:00
generatedunixname89002005307016
f74fc450e8 suppress errors in vision/fair/pytorch3d
Differential Revision: D51645956

fbshipit-source-id: 1ae7279efa0a27bb9bc5255527bafebb84fdafd0
2023-11-28 19:10:06 -08:00
Dan Johnson
3b4f8a4980 Adding reference and context to PointsRenderer
Summary: There was user confusion (https://github.com/facebookresearch/pytorch3d/issues/1579) about how zbuf is used for alpha compositing. Added a small description and a reference to the paper to give some context.

Reviewed By: bottler

Differential Revision: D51374933

fbshipit-source-id: 8c489a5b5d0a81f0d936c1348b9ade6787c39c9a
2023-11-16 08:58:08 -08:00
Aleksandrs Ecins
79b46734cb Fix lint in test_render_points
Summary: Fixes lint in test_render_points in the PyTorch3D library.

Differential Revision: D51289841

fbshipit-source-id: 1eae621eb8e87b0fe5979f35acd878944f574a6a
2023-11-14 11:07:28 -08:00
YangHai
55638f3bae Support reading uv and uv map for ply format if texture_uv exists in ply file (#1100)
Summary:
When the ply format looks as follows:
  ```
comment TextureFile ***.png
element vertex 892
property double x
property double y
property double z
property double nx
property double ny
property double nz
property double texture_u
property double texture_v
```
the `MeshPlyFormat` class will read the uv coordinates from the ply file and load the uv map from the texture file named in the TextureFile comment.

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1100

Reviewed By: MichaelRamamonjisoa

Differential Revision: D50885176

Pulled By: bottler

fbshipit-source-id: be75b1ec9a17a1ed87dbcf846a9072ea967aec37
2023-11-14 07:44:14 -08:00
Jeremy Reizenstein
f4f2209271 Fix for mask_points=False
Summary: Remove unused argument `mask_points` from `get_rgbd_point_cloud` and fix `get_implicitron_sequence_pointcloud`, which assumed it was used.

Reviewed By: MichaelRamamonjisoa

Differential Revision: D50885848

fbshipit-source-id: c0b834764ad5ef560107bd8eab04952d000489b8
2023-11-14 07:42:18 -08:00
Jeremy Reizenstein
f613682551 marching_cubes type fix
Summary: fixes https://github.com/facebookresearch/pytorch3d/issues/1679

Reviewed By: MichaelRamamonjisoa

Differential Revision: D50949933

fbshipit-source-id: 5c467de8bf84dd2a3d61748b3846678582d24ea3
2023-11-14 07:38:54 -08:00
295 changed files with 2947 additions and 950 deletions


@@ -162,90 +162,6 @@ workflows:
jobs:
# - main:
# context: DOCKERHUB_TOKEN
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py38_cu113_pyt1120
python_version: '3.8'
pytorch_version: 1.12.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py38_cu116_pyt1120
python_version: '3.8'
pytorch_version: 1.12.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py38_cu113_pyt1121
python_version: '3.8'
pytorch_version: 1.12.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py38_cu116_pyt1121
python_version: '3.8'
pytorch_version: 1.12.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py38_cu116_pyt1130
python_version: '3.8'
pytorch_version: 1.13.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py38_cu117_pyt1130
python_version: '3.8'
pytorch_version: 1.13.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py38_cu116_pyt1131
python_version: '3.8'
pytorch_version: 1.13.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py38_cu117_pyt1131
python_version: '3.8'
pytorch_version: 1.13.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py38_cu117_pyt200
python_version: '3.8'
pytorch_version: 2.0.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py38_cu118_pyt200
python_version: '3.8'
pytorch_version: 2.0.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py38_cu117_pyt201
python_version: '3.8'
pytorch_version: 2.0.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py38_cu118_pyt201
python_version: '3.8'
pytorch_version: 2.0.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
@@ -261,89 +177,103 @@ workflows:
python_version: '3.8'
pytorch_version: 2.1.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py39_cu113_pyt1120
python_version: '3.9'
pytorch_version: 1.12.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py39_cu116_pyt1120
python_version: '3.9'
pytorch_version: 1.12.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py38_cu118_pyt211
python_version: '3.8'
pytorch_version: 2.1.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py38_cu121_pyt211
python_version: '3.8'
pytorch_version: 2.1.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py39_cu113_pyt1121
python_version: '3.9'
pytorch_version: 1.12.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py39_cu116_pyt1121
python_version: '3.9'
pytorch_version: 1.12.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py39_cu116_pyt1130
python_version: '3.9'
pytorch_version: 1.13.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py39_cu117_pyt1130
python_version: '3.9'
pytorch_version: 1.13.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py39_cu116_pyt1131
python_version: '3.9'
pytorch_version: 1.13.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py39_cu117_pyt1131
python_version: '3.9'
pytorch_version: 1.13.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py39_cu117_pyt200
python_version: '3.9'
pytorch_version: 2.0.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py39_cu118_pyt200
python_version: '3.9'
pytorch_version: 2.0.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py39_cu117_pyt201
python_version: '3.9'
pytorch_version: 2.0.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py39_cu118_pyt201
python_version: '3.9'
pytorch_version: 2.0.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py38_cu118_pyt212
python_version: '3.8'
pytorch_version: 2.1.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py38_cu121_pyt212
python_version: '3.8'
pytorch_version: 2.1.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py38_cu118_pyt220
python_version: '3.8'
pytorch_version: 2.2.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py38_cu121_pyt220
python_version: '3.8'
pytorch_version: 2.2.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py38_cu118_pyt222
python_version: '3.8'
pytorch_version: 2.2.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py38_cu121_pyt222
python_version: '3.8'
pytorch_version: 2.2.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py38_cu118_pyt231
python_version: '3.8'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py38_cu121_pyt231
python_version: '3.8'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py38_cu118_pyt240
python_version: '3.8'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py38_cu121_pyt240
python_version: '3.8'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py38_cu118_pyt241
python_version: '3.8'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py38_cu121_pyt241
python_version: '3.8'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
@@ -359,89 +289,103 @@ workflows:
python_version: '3.9'
pytorch_version: 2.1.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py310_cu113_pyt1120
python_version: '3.10'
pytorch_version: 1.12.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py310_cu116_pyt1120
python_version: '3.10'
pytorch_version: 1.12.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py39_cu118_pyt211
python_version: '3.9'
pytorch_version: 2.1.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py39_cu121_pyt211
python_version: '3.9'
pytorch_version: 2.1.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py310_cu113_pyt1121
python_version: '3.10'
pytorch_version: 1.12.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py310_cu116_pyt1121
python_version: '3.10'
pytorch_version: 1.12.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py310_cu116_pyt1130
python_version: '3.10'
pytorch_version: 1.13.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py310_cu117_pyt1130
python_version: '3.10'
pytorch_version: 1.13.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py310_cu116_pyt1131
python_version: '3.10'
pytorch_version: 1.13.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py310_cu117_pyt1131
python_version: '3.10'
pytorch_version: 1.13.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py310_cu117_pyt200
python_version: '3.10'
pytorch_version: 2.0.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt200
python_version: '3.10'
pytorch_version: 2.0.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py310_cu117_pyt201
python_version: '3.10'
pytorch_version: 2.0.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt201
python_version: '3.10'
pytorch_version: 2.0.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py39_cu118_pyt212
python_version: '3.9'
pytorch_version: 2.1.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py39_cu121_pyt212
python_version: '3.9'
pytorch_version: 2.1.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py39_cu118_pyt220
python_version: '3.9'
pytorch_version: 2.2.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py39_cu121_pyt220
python_version: '3.9'
pytorch_version: 2.2.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py39_cu118_pyt222
python_version: '3.9'
pytorch_version: 2.2.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py39_cu121_pyt222
python_version: '3.9'
pytorch_version: 2.2.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py39_cu118_pyt231
python_version: '3.9'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py39_cu121_pyt231
python_version: '3.9'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py39_cu118_pyt240
python_version: '3.9'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py39_cu121_pyt240
python_version: '3.9'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py39_cu118_pyt241
python_version: '3.9'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py39_cu121_pyt241
python_version: '3.9'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
@@ -456,6 +400,104 @@ workflows:
name: linux_conda_py310_cu121_pyt210
python_version: '3.10'
pytorch_version: 2.1.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt211
python_version: '3.10'
pytorch_version: 2.1.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py310_cu121_pyt211
python_version: '3.10'
pytorch_version: 2.1.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt212
python_version: '3.10'
pytorch_version: 2.1.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py310_cu121_pyt212
python_version: '3.10'
pytorch_version: 2.1.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt220
python_version: '3.10'
pytorch_version: 2.2.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py310_cu121_pyt220
python_version: '3.10'
pytorch_version: 2.2.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt222
python_version: '3.10'
pytorch_version: 2.2.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py310_cu121_pyt222
python_version: '3.10'
pytorch_version: 2.2.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt231
python_version: '3.10'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py310_cu121_pyt231
python_version: '3.10'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt240
python_version: '3.10'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py310_cu121_pyt240
python_version: '3.10'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt241
python_version: '3.10'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py310_cu121_pyt241
python_version: '3.10'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
@@ -470,6 +512,174 @@ workflows:
name: linux_conda_py311_cu121_pyt210
python_version: '3.11'
pytorch_version: 2.1.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py311_cu118_pyt211
python_version: '3.11'
pytorch_version: 2.1.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py311_cu121_pyt211
python_version: '3.11'
pytorch_version: 2.1.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py311_cu118_pyt212
python_version: '3.11'
pytorch_version: 2.1.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py311_cu121_pyt212
python_version: '3.11'
pytorch_version: 2.1.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py311_cu118_pyt220
python_version: '3.11'
pytorch_version: 2.2.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py311_cu121_pyt220
python_version: '3.11'
pytorch_version: 2.2.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py311_cu118_pyt222
python_version: '3.11'
pytorch_version: 2.2.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py311_cu121_pyt222
python_version: '3.11'
pytorch_version: 2.2.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py311_cu118_pyt231
python_version: '3.11'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py311_cu121_pyt231
python_version: '3.11'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py311_cu118_pyt240
python_version: '3.11'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py311_cu121_pyt240
python_version: '3.11'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py311_cu118_pyt241
python_version: '3.11'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py311_cu121_pyt241
python_version: '3.11'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py312_cu118_pyt220
python_version: '3.12'
pytorch_version: 2.2.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py312_cu121_pyt220
python_version: '3.12'
pytorch_version: 2.2.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py312_cu118_pyt222
python_version: '3.12'
pytorch_version: 2.2.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py312_cu121_pyt222
python_version: '3.12'
pytorch_version: 2.2.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py312_cu118_pyt231
python_version: '3.12'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py312_cu121_pyt231
python_version: '3.12'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py312_cu118_pyt240
python_version: '3.12'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py312_cu121_pyt240
python_version: '3.12'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py312_cu118_pyt241
python_version: '3.12'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py312_cu121_pyt241
python_version: '3.12'
pytorch_version: 2.4.1
- binary_linux_conda_cuda:
name: testrun_conda_cuda_py310_cu117_pyt201
context: DOCKERHUB_TOKEN


@@ -19,19 +19,18 @@ from packaging import version
# The CUDA versions which have pytorch conda packages available for linux for each
# version of pytorch.
CONDA_CUDA_VERSIONS = {
"1.12.0": ["cu113", "cu116"],
"1.12.1": ["cu113", "cu116"],
"1.13.0": ["cu116", "cu117"],
"1.13.1": ["cu116", "cu117"],
"2.0.0": ["cu117", "cu118"],
"2.0.1": ["cu117", "cu118"],
"2.1.0": ["cu118", "cu121"], "2.1.0": ["cu118", "cu121"],
"2.1.1": ["cu118", "cu121"],
"2.1.2": ["cu118", "cu121"],
"2.2.0": ["cu118", "cu121"],
"2.2.2": ["cu118", "cu121"],
"2.3.1": ["cu118", "cu121"],
"2.4.0": ["cu118", "cu121"],
"2.4.1": ["cu118", "cu121"],
}
def conda_docker_image_for_cuda(cuda_version):
if cuda_version in ("cu101", "cu102", "cu111"):
return None
if len(cuda_version) != 5:
raise ValueError("Unknown cuda version")
return "pytorch/conda-builder:cuda" + cuda_version[2:]
@@ -52,12 +51,18 @@ def pytorch_versions_for_python(python_version):
for i in CONDA_CUDA_VERSIONS
if version.Version(i) >= version.Version("2.1.0")
]
if python_version == "3.12":
return [
i
for i in CONDA_CUDA_VERSIONS
if version.Version(i) >= version.Version("2.2.0")
]
def workflows(prefix="", filter_branch=None, upload=False, indentation=6):
w = []
for btype in ["conda"]:
for python_version in ["3.8", "3.9", "3.10", "3.11"]:
for python_version in ["3.8", "3.9", "3.10", "3.11", "3.12"]:
for pytorch_version in pytorch_versions_for_python(python_version):
for cu_version in CONDA_CUDA_VERSIONS[pytorch_version]:
w += workflow_pair(


@@ -1,5 +1,8 @@
[flake8]
ignore = E203, E266, E501, W503, E221
# B028 No explicit stacklevel argument found.
# B907 'foo' is manually surrounded by quotes, consider using the `!r` conversion flag.
# B905 `zip()` without an explicit `strict=` parameter.
ignore = E203, E266, E501, W503, E221, B028, B905, B907
max-line-length = 88
max-complexity = 18
select = B,C,E,F,W,T4,B9

.github/workflows/build.yml (new file, +20 lines)

@@ -0,0 +1,20 @@
name: facebookresearch/pytorch3d/build_and_test
on:
pull_request:
branches:
- main
jobs:
binary_linux_conda_cuda:
runs-on: 4-core-ubuntu-gpu-t4
env:
PYTHON_VERSION: "3.12"
BUILD_VERSION: "${{ github.run_number }}"
PYTORCH_VERSION: "2.4.1"
CU_VERSION: "cu121"
JUST_TESTRUN: 1
steps:
- uses: actions/checkout@v4
- name: Build and run tests
run: |-
conda create --name env --yes --quiet conda-build
conda run --no-capture-output --name env python3 ./packaging/build_conda.py --use-conda-cuda


@@ -8,11 +8,10 @@
The core library is written in PyTorch. Several components have underlying implementation in CUDA for improved performance. A subset of these components have CPU implementations in C++/PyTorch. It is advised to use PyTorch3D with GPU support in order to use all the features.
- Linux or macOS or Windows
- Python 3.8, 3.9 or 3.10
- PyTorch 1.12.0, 1.12.1, 1.13.0, 2.0.0, 2.0.1 or 2.1.0.
- Python
- PyTorch 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0 or 2.4.1.
- torchvision that matches the PyTorch installation. You can install them together as explained at pytorch.org to make sure of this.
- gcc & g++ ≥ 4.9
- [fvcore](https://github.com/facebookresearch/fvcore)
- [ioPath](https://github.com/facebookresearch/iopath)
- If CUDA is to be used, use a version which is supported by the corresponding pytorch version and at least version 9.2.
- If CUDA older than 11.7 is to be used and you are building from source, the CUB library must be available. We recommend version 1.10.0.
@@ -22,7 +21,7 @@ The runtime dependencies can be installed by running:
conda create -n pytorch3d python=3.9
conda activate pytorch3d
conda install pytorch=1.13.0 torchvision pytorch-cuda=11.6 -c pytorch -c nvidia
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c iopath iopath
```
For the CUB build time dependency, which you only need if you have CUDA older than 11.7, if you are using conda, you can continue with
@@ -49,6 +48,7 @@ For developing on top of PyTorch3D or contributing, you will need to run the lin
- tdqm
- jupyter
- imageio
- fvcore
- plotly
- opencv-python
@@ -59,6 +59,7 @@ conda install jupyter
pip install scikit-image matplotlib imageio plotly opencv-python pip install scikit-image matplotlib imageio plotly opencv-python
# Tests/Linting # Tests/Linting
conda install -c fvcore -c conda-forge fvcore
pip install black usort flake8 flake8-bugbear flake8-comprehensions pip install black usort flake8 flake8-bugbear flake8-comprehensions
``` ```
@@ -97,7 +98,7 @@ version_str="".join([
torch.version.cuda.replace(".",""), torch.version.cuda.replace(".",""),
f"_pyt{pyt_version_str}" f"_pyt{pyt_version_str}"
]) ])
!pip install fvcore iopath !pip install iopath
!pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
``` ```
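As a rough illustration of the wheel tag that the snippet above assembles (not part of the diff; the example values are hypothetical and depend on the local Python, CUDA and PyTorch versions):

```python
# Minimal sketch: reproduce the version_str logic from the install snippet above.
# With e.g. Python 3.10, CUDA 12.1 and PyTorch 2.4.0 this yields "py310_cu121_pyt240".
# Note that torch.version.cuda is None on CPU-only builds, so this only makes
# sense on a CUDA-enabled install.
import sys
import torch

pyt_version_str = torch.__version__.split("+")[0].replace(".", "")
version_str = "".join([
    f"py3{sys.version_info.minor}_cu",
    torch.version.cuda.replace(".", ""),
    f"_pyt{pyt_version_str}",
])
print(version_str)
```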

View File

@@ -146,6 +146,12 @@ If you are using the pulsar backend for sphere-rendering (the `PulsarPointRender
 Please see below for a timeline of the codebase updates in reverse chronological order. We are sharing updates on the releases as well as research projects which are built with PyTorch3D. The changelogs for the releases are available under [`Releases`](https://github.com/facebookresearch/pytorch3d/releases), and the builds can be installed using `conda` as per the instructions in [INSTALL.md](INSTALL.md).
+**[Oct 31st 2023]:** PyTorch3D [v0.7.5](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.7.5) released.
+**[May 10th 2023]:** PyTorch3D [v0.7.4](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.7.4) released.
+**[Apr 5th 2023]:** PyTorch3D [v0.7.3](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.7.3) released.
 **[Dec 19th 2022]:** PyTorch3D [v0.7.2](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.7.2) released.
 **[Oct 23rd 2022]:** PyTorch3D [v0.7.1](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.7.1) released.

View File

@@ -23,7 +23,7 @@ conda init bash
 source ~/.bashrc
 conda create -y -n myenv python=3.8 matplotlib ipython ipywidgets nbconvert
 conda activate myenv
-conda install -y -c fvcore -c iopath -c conda-forge fvcore iopath
+conda install -y -c iopath iopath
 conda install -y -c pytorch pytorch=1.6.0 cudatoolkit=10.1 torchvision
 conda install -y -c pytorch3d-nightly pytorch3d
 pip install plotly scikit-image

View File

@@ -5,7 +5,6 @@ sphinx_rtd_theme
 sphinx_markdown_tables
 numpy
 iopath
-fvcore
 https://download.pytorch.org/whl/cpu/torchvision-0.15.2%2Bcpu-cp311-cp311-linux_x86_64.whl
 https://download.pytorch.org/whl/cpu/torch-2.0.1%2Bcpu-cp311-cp311-linux_x86_64.whl
 omegaconf

View File

@@ -83,25 +83,31 @@
 "import os\n",
 "import sys\n",
 "import torch\n",
+"import subprocess\n",
 "need_pytorch3d=False\n",
 "try:\n",
 " import pytorch3d\n",
 "except ModuleNotFoundError:\n",
 " need_pytorch3d=True\n",
 "if need_pytorch3d:\n",
-" if torch.__version__.startswith(\"2.1.\") and sys.platform.startswith(\"linux\"):\n",
-" # We try to install PyTorch3D via a released wheel.\n",
-" pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
-" version_str=\"\".join([\n",
-" f\"py3{sys.version_info.minor}_cu\",\n",
-" torch.version.cuda.replace(\".\",\"\"),\n",
-" f\"_pyt{pyt_version_str}\"\n",
-" ])\n",
-" !pip install fvcore iopath\n",
-" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
-" else:\n",
-" # We try to install PyTorch3D from source.\n",
-" !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'"
+" pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
+" version_str=\"\".join([\n",
+" f\"py3{sys.version_info.minor}_cu\",\n",
+" torch.version.cuda.replace(\".\",\"\"),\n",
+" f\"_pyt{pyt_version_str}\"\n",
+" ])\n",
+" !pip install iopath\n",
+" if sys.platform.startswith(\"linux\"):\n",
+" print(\"Trying to install wheel for PyTorch3D\")\n",
+" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
+" pip_list = !pip freeze\n",
+" need_pytorch3d = not any(i.startswith(\"pytorch3d==\") for i in pip_list)\n",
+" if need_pytorch3d:\n",
+" print(f\"failed to find/install wheel for {version_str}\")\n",
+"if need_pytorch3d:\n",
+" print(\"Installing PyTorch3D from source\")\n",
+" !pip install ninja\n",
+" !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'"
 ]
 },
 {
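The updated cell above decides whether a prebuilt wheel was actually installed by scanning `pip freeze`, and only falls back to a source build if not. A standalone sketch of that check, written with `subprocess` instead of the notebook's `!pip` magics (an assumption for use outside Jupyter, not part of the tutorials):

```python
# Sketch of the wheel-availability check performed by the updated notebook cell,
# without IPython magics. Assumes pip is runnable via the current interpreter.
import subprocess
import sys

def pytorch3d_wheel_installed() -> bool:
    # Mirror the cell's `pip_list = !pip freeze` / startswith("pytorch3d==") test.
    frozen = subprocess.run(
        [sys.executable, "-m", "pip", "freeze"],
        capture_output=True,
        text=True,
        check=True,
    ).stdout.splitlines()
    return any(line.startswith("pytorch3d==") for line in frozen)

if not pytorch3d_wheel_installed():
    # Fall back to building from source, as the cell does.
    subprocess.run([sys.executable, "-m", "pip", "install", "ninja"], check=True)
    subprocess.run(
        [sys.executable, "-m", "pip", "install",
         "git+https://github.com/facebookresearch/pytorch3d.git@stable"],
        check=True,
    )
```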

View File

@@ -70,25 +70,31 @@
 [identical installation-cell update to the one shown in the previous notebook: `import subprocess` is added and the old wheel-install branch is replaced by the new iopath / wheel-check / ninja flow]

View File

@@ -45,25 +45,31 @@
 [identical installation-cell update as above]
@@ -405,7 +411,7 @@
 "outputs": [],
 "source": [
 "random_model_images = shapenet_dataset.render(\n",
-" sample_nums=[3],\n",
+" sample_nums=[5],\n",
 " device=device,\n",
 " cameras=cameras,\n",
 " raster_settings=raster_settings,\n",

View File

@@ -84,25 +84,31 @@
 [identical installation-cell update as above]

View File

@@ -50,25 +50,31 @@
 [identical installation-cell update as above]

View File

@@ -62,25 +62,31 @@
 [identical installation-cell update as above]

View File

@@ -41,25 +41,31 @@
 [identical installation-cell update as above]

View File

@@ -72,25 +72,31 @@
 [identical installation-cell update as above]

View File

@@ -66,25 +66,31 @@
 [identical installation-cell update as above]

View File

@@ -44,25 +44,31 @@
 [identical installation-cell update as above]

View File

@@ -51,25 +51,31 @@
 [identical installation-cell update as above]

View File

@@ -67,25 +67,31 @@
 [identical installation-cell update as above]

View File

@@ -4,10 +4,11 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+import argparse
 import os.path
 import runpy
 import subprocess
-from typing import List
+from typing import List, Tuple
 # required env vars:
 # CU_VERSION: E.g. cu112
@@ -23,7 +24,7 @@ pytorch_major_minor = tuple(int(i) for i in PYTORCH_VERSION.split(".")[:2])
 source_root_dir = os.environ["PWD"]
-def version_constraint(version):
+def version_constraint(version) -> str:
     """
     Given version "11.3" returns " >=11.3,<11.4"
     """
@@ -32,7 +33,7 @@ def version_constraint(version):
     return f" >={version},<{upper}"
-def get_cuda_major_minor():
+def get_cuda_major_minor() -> Tuple[str, str]:
     if CU_VERSION == "cpu":
         raise ValueError("fn only for cuda builds")
     if len(CU_VERSION) != 5 or CU_VERSION[:2] != "cu":
@@ -42,11 +43,10 @@ def get_cuda_major_minor():
     return major, minor
-def setup_cuda():
+def setup_cuda(use_conda_cuda: bool) -> List[str]:
     if CU_VERSION == "cpu":
-        return
+        return []
     major, minor = get_cuda_major_minor()
-    os.environ["CUDA_HOME"] = f"/usr/local/cuda-{major}.{minor}/"
     os.environ["FORCE_CUDA"] = "1"
     basic_nvcc_flags = (
@@ -75,11 +75,26 @@ def setup_cuda():
     if os.environ.get("JUST_TESTRUN", "0") != "1":
         os.environ["NVCC_FLAGS"] = nvcc_flags
+    if use_conda_cuda:
+        os.environ["CONDA_CUDA_TOOLKIT_BUILD_CONSTRAINT1"] = "- cuda-toolkit"
+        os.environ["CONDA_CUDA_TOOLKIT_BUILD_CONSTRAINT2"] = (
+            f"- cuda-version={major}.{minor}"
+        )
+        return ["-c", f"nvidia/label/cuda-{major}.{minor}.0"]
+    else:
+        os.environ["CUDA_HOME"] = f"/usr/local/cuda-{major}.{minor}/"
+        return []
 def setup_conda_pytorch_constraint() -> List[str]:
     pytorch_constraint = f"- pytorch=={PYTORCH_VERSION}"
     os.environ["CONDA_PYTORCH_CONSTRAINT"] = pytorch_constraint
+    if pytorch_major_minor < (2, 2):
+        os.environ["CONDA_PYTORCH_MKL_CONSTRAINT"] = "- mkl!=2024.1.0"
+        os.environ["SETUPTOOLS_CONSTRAINT"] = "- setuptools<70"
+    else:
+        os.environ["CONDA_PYTORCH_MKL_CONSTRAINT"] = ""
+        os.environ["SETUPTOOLS_CONSTRAINT"] = "- setuptools"
     os.environ["CONDA_PYTORCH_BUILD_CONSTRAINT"] = pytorch_constraint
     os.environ["PYTORCH_VERSION_NODOT"] = PYTORCH_VERSION.replace(".", "")
@@ -89,7 +104,7 @@ def setup_conda_pytorch_constraint() -> List[str]:
     return ["-c", "pytorch", "-c", "nvidia"]
-def setup_conda_cudatoolkit_constraint():
+def setup_conda_cudatoolkit_constraint() -> None:
     if CU_VERSION == "cpu":
         os.environ["CONDA_CPUONLY_FEATURE"] = "- cpuonly"
         os.environ["CONDA_CUDATOOLKIT_CONSTRAINT"] = ""
@@ -110,14 +125,14 @@ def setup_conda_cudatoolkit_constraint():
     os.environ["CONDA_CUDATOOLKIT_CONSTRAINT"] = toolkit
-def do_build(start_args: List[str]):
+def do_build(start_args: List[str]) -> None:
     args = start_args.copy()
     test_flag = os.environ.get("TEST_FLAG")
     if test_flag is not None:
         args.append(test_flag)
-    args.extend(["-c", "bottler", "-c", "fvcore", "-c", "iopath", "-c", "conda-forge"])
+    args.extend(["-c", "bottler", "-c", "iopath", "-c", "conda-forge"])
     args.append("--no-anaconda-upload")
     args.extend(["--python", os.environ["PYTHON_VERSION"]])
     args.append("packaging/pytorch3d")
@@ -126,8 +141,16 @@ def do_build(start_args: List[str]):
 if __name__ == "__main__":
+    parser = argparse.ArgumentParser(description="Build the conda package.")
+    parser.add_argument(
+        "--use-conda-cuda",
+        action="store_true",
+        help="get cuda from conda ignoring local cuda",
+    )
+    our_args = parser.parse_args()
     args = ["conda", "build"]
-    setup_cuda()
+    args += setup_cuda(use_conda_cuda=our_args.use_conda_cuda)
     init_path = source_root_dir + "/pytorch3d/__init__.py"
     build_version = runpy.run_path(init_path)["__version__"]
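For context on the new `--use-conda-cuda` plumbing above, a simplified sketch (an assumption-laden stand-in, not the real packaging script) of how such a flag can switch between a locally installed CUDA toolkit and conda-provided CUDA by returning extra `conda build` channel arguments:

```python
# Simplified sketch: an optional --use-conda-cuda flag either pulls the CUDA
# toolkit from the nvidia conda channel (exporting build constraints consumed
# by the recipe) or points CUDA_HOME at a local installation. The hard-coded
# 12.1 default is purely illustrative.
import argparse
import os
from typing import List

def setup_cuda(use_conda_cuda: bool, major: str = "12", minor: str = "1") -> List[str]:
    os.environ["FORCE_CUDA"] = "1"
    if use_conda_cuda:
        os.environ["CONDA_CUDA_TOOLKIT_BUILD_CONSTRAINT1"] = "- cuda-toolkit"
        os.environ["CONDA_CUDA_TOOLKIT_BUILD_CONSTRAINT2"] = f"- cuda-version={major}.{minor}"
        return ["-c", f"nvidia/label/cuda-{major}.{minor}.0"]
    # Otherwise rely on a locally installed toolkit.
    os.environ["CUDA_HOME"] = f"/usr/local/cuda-{major}.{minor}/"
    return []

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Build the conda package.")
    parser.add_argument("--use-conda-cuda", action="store_true",
                        help="get cuda from conda ignoring local cuda")
    build_args = ["conda", "build"] + setup_cuda(parser.parse_args().use_conda_cuda)
    print(build_args)
```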

View File

@@ -26,6 +26,6 @@ version_str="".join([
 torch.version.cuda.replace(".",""),
 f"_pyt{pyt_version_str}"
 ])
-!pip install fvcore iopath
+!pip install iopath
 !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
 ```

View File

@@ -144,7 +144,7 @@ do
 conda activate "$tag"
 # shellcheck disable=SC2086
 conda install -y -c pytorch $extra_channel "pytorch=$pytorch_version" "$cudatools=$CUDA_TAG"
-pip install fvcore iopath
+pip install iopath
 echo "python version" "$python_version" "pytorch version" "$pytorch_version" "cuda version" "$cu_version" "tag" "$tag"
 rm -rf dist

View File

@@ -8,12 +8,16 @@ source:
 requirements:
 build:
 - {{ compiler('c') }} # [win]
+{{ environ.get('CONDA_CUDA_TOOLKIT_BUILD_CONSTRAINT1', '') }}
+{{ environ.get('CONDA_CUDA_TOOLKIT_BUILD_CONSTRAINT2', '') }}
 {{ environ.get('CONDA_CUB_CONSTRAINT') }}
 host:
 - python
-- setuptools
+- mkl =2023 # [x86_64]
+{{ environ.get('SETUPTOOLS_CONSTRAINT') }}
 {{ environ.get('CONDA_PYTORCH_BUILD_CONSTRAINT') }}
+{{ environ.get('CONDA_PYTORCH_MKL_CONSTRAINT') }}
 {{ environ.get('CONDA_CUDATOOLKIT_CONSTRAINT') }}
 {{ environ.get('CONDA_CPUONLY_FEATURE') }}
@@ -21,13 +25,14 @@ requirements:
 - python
 - numpy >=1.11
 - torchvision >=0.5
-- fvcore
+- mkl =2023 # [x86_64]
 - iopath
 {{ environ.get('CONDA_PYTORCH_CONSTRAINT') }}
 {{ environ.get('CONDA_CUDATOOLKIT_CONSTRAINT') }}
 build:
 string: py{{py}}_{{ environ['CU_VERSION'] }}_pyt{{ environ['PYTORCH_VERSION_NODOT']}}
+# script: LD_LIBRARY_PATH=$PREFIX/lib:$BUILD_PREFIX/lib:$LD_LIBRARY_PATH python setup.py install --single-version-externally-managed --record=record.txt # [not win]
 script: python setup.py install --single-version-externally-managed --record=record.txt # [not win]
 script_env:
 - CUDA_HOME
@@ -47,6 +52,10 @@ test:
 - imageio
 - hydra-core
 - accelerate
+- matplotlib
+- tabulate
+- pandas
+- sqlalchemy
 commands:
 #pytest .
 python -m unittest discover -v -s tests -t .

View File

@@ -3,3 +3,5 @@
 #
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe

View File

@@ -5,6 +5,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 """"
 This file is the entry point for launching experiments with Implicitron.
@@ -97,7 +99,7 @@ except ModuleNotFoundError:
 no_accelerate = os.environ.get("PYTORCH3D_NO_ACCELERATE") is not None
-class Experiment(Configurable):  # pyre-ignore: 13
+class Experiment(Configurable):
     """
     This class is at the top level of Implicitron's config hierarchy. Its
     members are high-level components necessary for training an implicit rende-
@@ -118,12 +120,16 @@ class Experiment(Configurable):  # pyre-ignore: 13
     will be saved here.
     """
+    # pyre-fixme[13]: Attribute `data_source` is never initialized.
     data_source: DataSourceBase
     data_source_class_type: str = "ImplicitronDataSource"
+    # pyre-fixme[13]: Attribute `model_factory` is never initialized.
     model_factory: ModelFactoryBase
     model_factory_class_type: str = "ImplicitronModelFactory"
+    # pyre-fixme[13]: Attribute `optimizer_factory` is never initialized.
     optimizer_factory: OptimizerFactoryBase
     optimizer_factory_class_type: str = "ImplicitronOptimizerFactory"
+    # pyre-fixme[13]: Attribute `training_loop` is never initialized.
     training_loop: TrainingLoopBase
     training_loop_class_type: str = "ImplicitronTrainingLoop"
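The `pyre-fixme[13]` suppressions above mark attributes that are only annotated here and are filled in later by Implicitron's configuration machinery. A toy sketch of the pattern that triggers the warning (hypothetical stub classes, not the real Implicitron API):

```python
# Toy illustration (hypothetical names): an annotated attribute with no default
# is expected to be populated by a config/factory system at runtime, which a
# static checker such as Pyre cannot verify -- hence "Attribute ... is never
# initialized" (error code 13) and the inline suppressions above.
from dataclasses import dataclass

@dataclass
class DataSourceStub:
    dataset_root: str = "."

@dataclass
class ExperimentStub:
    data_source: DataSourceStub  # no default: filled in by the framework
    data_source_class_type: str = "ImplicitronDataSource"

exp = ExperimentStub(data_source=DataSourceStub())
print(exp.data_source_class_type)
```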

View File

@@ -3,3 +3,5 @@
 #
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import logging
 import os
 from typing import Optional
@@ -43,7 +45,7 @@ class ModelFactoryBase(ReplaceableBase):
 @registry.register
-class ImplicitronModelFactory(ModelFactoryBase):  # pyre-ignore [13]
+class ImplicitronModelFactory(ModelFactoryBase):
     """
     A factory class that initializes an implicit rendering model.
@@ -59,6 +61,7 @@ class ImplicitronModelFactory(ModelFactoryBase):  # pyre-ignore [13]
     """
+    # pyre-fixme[13]: Attribute `model` is never initialized.
     model: ImplicitronModelBase
     model_class_type: str = "GenericModel"
     resume: bool = True

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import inspect
 import logging
 import os
@@ -121,7 +123,6 @@ class ImplicitronOptimizerFactory(OptimizerFactoryBase):
         """
         # Get the parameters to optimize
         if hasattr(model, "_get_param_groups"):  # use the model function
-            # pyre-ignore[29]
             p_groups = model._get_param_groups(self.lr, wd=self.weight_decay)
         else:
             p_groups = [

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import logging
 import os
 import time
@@ -28,13 +30,13 @@ from .utils import seed_all_random_engines
 logger = logging.getLogger(__name__)
-# pyre-fixme[13]: Attribute `evaluator` is never initialized.
 class TrainingLoopBase(ReplaceableBase):
     """
     Members:
         evaluator: An EvaluatorBase instance, used to evaluate training results.
     """
+    # pyre-fixme[13]: Attribute `evaluator` is never initialized.
     evaluator: Optional[EvaluatorBase]
     evaluator_class_type: Optional[str] = "ImplicitronEvaluator"
@@ -110,6 +112,8 @@ class ImplicitronTrainingLoop(TrainingLoopBase):
     def __post_init__(self):
         run_auto_creation(self)
+    # pyre-fixme[14]: `run` overrides method defined in `TrainingLoopBase`
+    #  inconsistently.
     def run(
         self,
         *,
@@ -391,7 +395,6 @@ class ImplicitronTrainingLoop(TrainingLoopBase):
         ):
             prefix = f"e{stats.epoch}_it{stats.it[trainmode]}"
             if hasattr(model, "visualize"):
-                # pyre-ignore [29]
                 model.visualize(
                     viz,
                     visdom_env_imgs,

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import random

View File

@@ -3,3 +3,5 @@
 #
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import os
 import tempfile
 import unittest

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import logging
 import os
 import unittest

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import os
 import unittest

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import contextlib
 import logging
 import os

View File

@@ -5,6 +5,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 """
 Script to visualize a previously trained model. Example call:

View File

@@ -343,12 +343,14 @@ class RadianceFieldRenderer(torch.nn.Module):
             # For a full render pass concatenate the output chunks,
             # and reshape to image size.
             out = {
-                k: torch.cat(
-                    [ch_o[k] for ch_o in chunk_outputs],
-                    dim=1,
-                ).view(-1, *self._image_size, 3)
-                if chunk_outputs[0][k] is not None
-                else None
+                k: (
+                    torch.cat(
+                        [ch_o[k] for ch_o in chunk_outputs],
+                        dim=1,
+                    ).view(-1, *self._image_size, 3)
+                    if chunk_outputs[0][k] is not None
+                    else None
+                )
                 for k in ("rgb_fine", "rgb_coarse", "rgb_gt")
             }
         else:
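The reformatted comprehension above only wraps the existing chunk-merging logic in parentheses; the behaviour is unchanged. A self-contained sketch of that pattern with made-up shapes (an illustration, not the NeRF renderer itself):

```python
# Standalone sketch (hypothetical shapes): per-chunk render outputs are
# concatenated along the ray dimension and reshaped to the full image;
# keys whose chunks are None stay None.
import torch

image_size = (4, 6)  # H, W, made up for the example
chunk_outputs = [
    {"rgb_fine": torch.rand(1, 12, 3), "rgb_gt": None},
    {"rgb_fine": torch.rand(1, 12, 3), "rgb_gt": None},
]
out = {
    k: (
        torch.cat([ch_o[k] for ch_o in chunk_outputs], dim=1).view(-1, *image_size, 3)
        if chunk_outputs[0][k] is not None
        else None
    )
    for k in ("rgb_fine", "rgb_gt")
}
print(out["rgb_fine"].shape)  # torch.Size([1, 4, 6, 3])
print(out["rgb_gt"])          # None
```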

View File

@@ -4,4 +4,6 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
-__version__ = "0.7.5"
+# pyre-unsafe
+__version__ = "0.7.8"

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 from .datatypes import Device, get_device, make_device

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 from typing import Sequence, Tuple, Union
 import torch

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 from typing import Optional, Union
 import torch

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import math
 from typing import Tuple

View File

@@ -4,5 +4,7 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 from .symeig3x3 import symeig3x3
 from .utils import _safe_det_3x3

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import math
 from typing import Optional, Tuple

View File

@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import torch

View File

@@ -99,6 +99,7 @@ PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
 m.def("marching_cubes", &MarchingCubes);
 // Pulsar.
+// Pulsar not enabled on AMD.
 #ifdef PULSAR_LOGGING_ENABLED
 c10::ShowLogInfoToStderr();
 #endif

View File

@@ -338,7 +338,7 @@ std::tuple<at::Tensor, at::Tensor> KNearestNeighborIdxCuda(
 TORCH_CHECK((norm == 1) || (norm == 2), "Norm must be 1 or 2.");
-TORCH_CHECK(p2.size(2) == D, "Point sets must have the same last dimension");
+TORCH_CHECK(p1.size(2) == D, "Point sets must have the same last dimension");
 auto long_dtype = lengths1.options().dtype(at::kLong);
 auto idxs = at::zeros({N, P1, K}, long_dtype);
 auto dists = at::zeros({N, P1, K}, p1.options());
@@ -495,7 +495,7 @@ __global__ void KNearestNeighborBackwardKernel(
 if ((p1_idx < num1) && (k < num2)) {
 const float grad_dist = grad_dists[n * P1 * K + p1_idx * K + k];
 // index of point in p2 corresponding to the k-th nearest neighbor
-const size_t p2_idx = idxs[n * P1 * K + p1_idx * K + k];
+const int64_t p2_idx = idxs[n * P1 * K + p1_idx * K + k];
 // If the index is the pad value of -1 then ignore it
 if (p2_idx == -1) {
 continue;
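The corrected `TORCH_CHECK` above validates that both point clouds share the same last dimension `D`. A usage sketch from the Python side (assumes `pytorch3d` is installed; `knn_points` is the op that dispatches to this kernel on GPU inputs):

```python
# Usage sketch: the two point clouds passed to knn_points must share the last
# dimension D, which is exactly what the corrected check enforces. Shapes here
# are arbitrary example values.
import torch
from pytorch3d.ops import knn_points

N, P1, P2, D, K = 2, 128, 256, 3, 8
p1 = torch.rand(N, P1, D)
p2 = torch.rand(N, P2, D)  # same D as p1; P2 may differ from P1
knn = knn_points(p1, p2, K=K, norm=2)
print(knn.dists.shape, knn.idx.shape)  # both (N, P1, K)
```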

View File

@@ -223,7 +223,7 @@ __global__ void CompactVoxelsKernel(
compactedVoxelArray, compactedVoxelArray,
const at::PackedTensorAccessor32<int, 1, at::RestrictPtrTraits> const at::PackedTensorAccessor32<int, 1, at::RestrictPtrTraits>
voxelOccupied, voxelOccupied,
const at::PackedTensorAccessor32<int, 1, at::RestrictPtrTraits> const at::PackedTensorAccessor32<int64_t, 1, at::RestrictPtrTraits>
voxelOccupiedScan, voxelOccupiedScan,
uint numVoxels) { uint numVoxels) {
uint id = blockIdx.x * blockDim.x + threadIdx.x; uint id = blockIdx.x * blockDim.x + threadIdx.x;
@@ -255,7 +255,8 @@ __global__ void GenerateFacesKernel(
at::PackedTensorAccessor<int64_t, 1, at::RestrictPtrTraits> ids, at::PackedTensorAccessor<int64_t, 1, at::RestrictPtrTraits> ids,
at::PackedTensorAccessor32<int, 1, at::RestrictPtrTraits> at::PackedTensorAccessor32<int, 1, at::RestrictPtrTraits>
compactedVoxelArray, compactedVoxelArray,
at::PackedTensorAccessor32<int, 1, at::RestrictPtrTraits> numVertsScanned, at::PackedTensorAccessor32<int64_t, 1, at::RestrictPtrTraits>
numVertsScanned,
const uint activeVoxels, const uint activeVoxels,
const at::PackedTensorAccessor32<float, 3, at::RestrictPtrTraits> vol, const at::PackedTensorAccessor32<float, 3, at::RestrictPtrTraits> vol,
const at::PackedTensorAccessor32<int, 2, at::RestrictPtrTraits> faceTable, const at::PackedTensorAccessor32<int, 2, at::RestrictPtrTraits> faceTable,
@@ -381,6 +382,44 @@ __global__ void GenerateFacesKernel(
} // end for grid-strided kernel } // end for grid-strided kernel
} }
// ATen/Torch does not have an exclusive-scan operator. Additionally, in the
// code below we need to get the "total number of items to work on" after
// a scan, which with an inclusive-scan would simply be the value of the last
// element in the tensor.
//
// This utility function hits two birds with one stone, by running
// an inclusive-scan into a right-shifted view of a tensor that's
// allocated to be one element bigger than the input tensor.
//
// Note; return tensor is `int64_t` per element, even if the input
// tensor is only 32-bit. Also, the return tensor is one element bigger
// than the input one.
//
// Secondary optional argument is an output argument that gets the
// value of the last element of the return tensor (because you almost
// always need this CPU-side right after this function anyway).
static at::Tensor ExclusiveScanAndTotal(
const at::Tensor& inTensor,
int64_t* optTotal = nullptr) {
const auto inSize = inTensor.sizes()[0];
auto retTensor = at::zeros({inSize + 1}, at::kLong).to(inTensor.device());
using at::indexing::None;
using at::indexing::Slice;
auto rightShiftedView = retTensor.index({Slice(1, None)});
// Do an (inclusive-scan) cumulative sum in to the view that's
// shifted one element to the right...
at::cumsum_out(rightShiftedView, inTensor, 0, at::kLong);
if (optTotal) {
*optTotal = retTensor[inSize].cpu().item<int64_t>();
}
// ...so that the not-shifted tensor holds the exclusive-scan
return retTensor;
}
// Entrance for marching cubes cuda extension. Marching Cubes is an algorithm to // Entrance for marching cubes cuda extension. Marching Cubes is an algorithm to
// create triangle meshes from an implicit function (one of the form f(x, y, z) // create triangle meshes from an implicit function (one of the form f(x, y, z)
// = 0). It works by iteratively checking a grid of cubes superimposed over a // = 0). It works by iteratively checking a grid of cubes superimposed over a
@@ -443,20 +482,18 @@ std::tuple<at::Tensor, at::Tensor, at::Tensor> MarchingCubesCuda(
using at::indexing::Slice; using at::indexing::Slice;
auto d_voxelVerts = auto d_voxelVerts =
at::zeros({numVoxels + 1}, at::TensorOptions().dtype(at::kInt)) at::zeros({numVoxels}, at::TensorOptions().dtype(at::kInt))
.to(vol.device()); .to(vol.device());
auto d_voxelVerts_ = d_voxelVerts.index({Slice(1, None)});
auto d_voxelOccupied = auto d_voxelOccupied =
at::zeros({numVoxels + 1}, at::TensorOptions().dtype(at::kInt)) at::zeros({numVoxels}, at::TensorOptions().dtype(at::kInt))
.to(vol.device()); .to(vol.device());
auto d_voxelOccupied_ = d_voxelOccupied.index({Slice(1, None)});
// Execute "ClassifyVoxelKernel" kernel to precompute // Execute "ClassifyVoxelKernel" kernel to precompute
// two arrays - d_voxelOccupied and d_voxelVertices to global memory, // two arrays - d_voxelOccupied and d_voxelVertices to global memory,
// which stores the occupancy state and number of voxel vertices per voxel. // which stores the occupancy state and number of voxel vertices per voxel.
ClassifyVoxelKernel<<<grid, threads, 0, stream>>>( ClassifyVoxelKernel<<<grid, threads, 0, stream>>>(
d_voxelVerts_.packed_accessor32<int, 1, at::RestrictPtrTraits>(), d_voxelVerts.packed_accessor32<int, 1, at::RestrictPtrTraits>(),
d_voxelOccupied_.packed_accessor32<int, 1, at::RestrictPtrTraits>(), d_voxelOccupied.packed_accessor32<int, 1, at::RestrictPtrTraits>(),
vol.packed_accessor32<float, 3, at::RestrictPtrTraits>(), vol.packed_accessor32<float, 3, at::RestrictPtrTraits>(),
isolevel); isolevel);
AT_CUDA_CHECK(cudaGetLastError()); AT_CUDA_CHECK(cudaGetLastError());
@@ -466,12 +503,9 @@ std::tuple<at::Tensor, at::Tensor, at::Tensor> MarchingCubesCuda(
// count for voxels in the grid and compute the number of active voxels. // count for voxels in the grid and compute the number of active voxels.
// If the number of active voxels is 0, return zero tensor for verts and // If the number of active voxels is 0, return zero tensor for verts and
// faces. // faces.
int64_t activeVoxels = 0;
auto d_voxelOccupiedScan = at::cumsum(d_voxelOccupied, 0); auto d_voxelOccupiedScan =
auto d_voxelOccupiedScan_ = d_voxelOccupiedScan.index({Slice(1, None)}); ExclusiveScanAndTotal(d_voxelOccupied, &activeVoxels);
// number of active voxels
int activeVoxels = d_voxelOccupiedScan[numVoxels].cpu().item<int>();
const int device_id = vol.device().index(); const int device_id = vol.device().index();
auto opt = at::TensorOptions().dtype(at::kInt).device(at::kCUDA, device_id); auto opt = at::TensorOptions().dtype(at::kInt).device(at::kCUDA, device_id);
@@ -486,23 +520,21 @@ std::tuple<at::Tensor, at::Tensor, at::Tensor> MarchingCubesCuda(
     return std::make_tuple(verts, faces, ids);
   }
-  // Execute "CompactVoxelsKernel" kernel to compress voxels for accleration.
+  // Execute "CompactVoxelsKernel" kernel to compress voxels for acceleration.
   // This allows us to run triangle generation on only the occupied voxels.
   auto d_compVoxelArray = at::zeros({activeVoxels}, opt);
   CompactVoxelsKernel<<<grid, threads, 0, stream>>>(
       d_compVoxelArray.packed_accessor32<int, 1, at::RestrictPtrTraits>(),
       d_voxelOccupied.packed_accessor32<int, 1, at::RestrictPtrTraits>(),
-      d_voxelOccupiedScan_.packed_accessor32<int, 1, at::RestrictPtrTraits>(),
+      d_voxelOccupiedScan
+          .packed_accessor32<int64_t, 1, at::RestrictPtrTraits>(),
       numVoxels);
   AT_CUDA_CHECK(cudaGetLastError());
   cudaDeviceSynchronize();
   // Scan d_voxelVerts array to generate offsets of vertices for each voxel
-  auto d_voxelVertsScan = at::cumsum(d_voxelVerts, 0);
-  auto d_voxelVertsScan_ = d_voxelVertsScan.index({Slice(1, None)});
-  // total number of vertices
-  int totalVerts = d_voxelVertsScan[numVoxels].cpu().item<int>();
+  int64_t totalVerts = 0;
+  auto d_voxelVertsScan = ExclusiveScanAndTotal(d_voxelVerts, &totalVerts);
   // Execute "GenerateFacesKernel" kernel
   // This runs only on the occupied voxels.
@@ -522,7 +554,7 @@ std::tuple<at::Tensor, at::Tensor, at::Tensor> MarchingCubesCuda(
       faces.packed_accessor<int64_t, 2, at::RestrictPtrTraits>(),
       ids.packed_accessor<int64_t, 1, at::RestrictPtrTraits>(),
       d_compVoxelArray.packed_accessor32<int, 1, at::RestrictPtrTraits>(),
-      d_voxelVertsScan_.packed_accessor32<int, 1, at::RestrictPtrTraits>(),
+      d_voxelVertsScan.packed_accessor32<int64_t, 1, at::RestrictPtrTraits>(),
       activeVoxels,
       vol.packed_accessor32<float, 3, at::RestrictPtrTraits>(),
       faceTable.packed_accessor32<int, 2, at::RestrictPtrTraits>(),
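The two ExclusiveScanAndTotal calls drive a standard count / scan / compact pattern: each voxel first reports a count (occupancy, vertex count), the exclusive scan turns those counts into write offsets, and a compaction kernel then scatters only the occupied voxels. A host-side sketch of that compaction step, with illustrative names only:

#include <cstdint>
#include <vector>

// Sketch of what CompactVoxelsKernel does per thread: the exclusive scan of
// the 0/1 occupancy flags gives each occupied voxel its slot in the
// compacted list of voxel indices.
std::vector<int> CompactOccupiedVoxels(
    const std::vector<int>& occupied, // 0/1 flag per voxel
    const std::vector<int64_t>& occupiedScan, // exclusive scan of `occupied`
    int64_t activeVoxels) {
  std::vector<int> compacted(static_cast<size_t>(activeVoxels));
  for (size_t v = 0; v < occupied.size(); ++v) {
    if (occupied[v]) {
      compacted[static_cast<size_t>(occupiedScan[v])] = static_cast<int>(v);
    }
  }
  return compacted;
}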


@@ -71,8 +71,8 @@ std::tuple<at::Tensor, at::Tensor, at::Tensor> MarchingCubesCpu(
         if ((j + 1) % 3 == 0 && ps[0] != ps[1] && ps[1] != ps[2] &&
             ps[2] != ps[0]) {
           for (int k = 0; k < 3; k++) {
-            int v = tri[k];
-            edge_id_to_v[tri.at(k)] = ps.at(k);
+            int64_t v = tri.at(k);
+            edge_id_to_v[v] = ps.at(k);
             if (!uniq_edge_id.count(v)) {
               uniq_edge_id[v] = verts.size();
               verts.push_back(edge_id_to_v[v]);
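The CPU path above deduplicates interpolated vertices by their global edge id: a vertex shared by several triangles is stored once and re-referenced by index. A standalone sketch of that pattern (names are illustrative, not the actual helper):

#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vertex {
  float x, y, z;
};

// Return the index of the vertex for `edge_id`, appending it to `verts`
// the first time that edge id is seen.
int64_t GetOrAddVertex(
    int64_t edge_id,
    const Vertex& position,
    std::unordered_map<int64_t, int64_t>& uniq_edge_id,
    std::vector<Vertex>& verts) {
  auto it = uniq_edge_id.find(edge_id);
  if (it != uniq_edge_id.end()) {
    return it->second; // already emitted: reuse the existing index
  }
  const int64_t index = static_cast<int64_t>(verts.size());
  uniq_edge_id[edge_id] = index;
  verts.push_back(position);
  return index;
}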


@@ -30,11 +30,20 @@
 #define GLOBAL __global__
 #define RESTRICT __restrict__
 #define DEBUGBREAK()
+#ifdef __NVCC_DIAG_PRAGMA_SUPPORT__
+#pragma nv_diag_suppress 1866
+#pragma nv_diag_suppress 2941
+#pragma nv_diag_suppress 2951
+#pragma nv_diag_suppress 2967
+#else
+#if !defined(USE_ROCM)
 #pragma diag_suppress = attribute_not_allowed
 #pragma diag_suppress = 1866
 #pragma diag_suppress = 2941
 #pragma diag_suppress = 2951
 #pragma diag_suppress = 2967
+#endif //! USE_ROCM
+#endif
 #else // __CUDACC__
 #define INLINE inline
 #define HOST
@@ -49,6 +58,9 @@
 #pragma clang diagnostic pop
 #ifdef WITH_CUDA
 #include <ATen/cuda/CUDAContext.h>
+#if !defined(USE_ROCM)
+#include <vector_functions.h>
+#endif //! USE_ROCM
 #else
 #ifndef cudaStream_t
 typedef void* cudaStream_t;
@@ -65,8 +77,6 @@ struct float2 {
 struct float3 {
   float x, y, z;
 };
-#endif
-namespace py = pybind11;
 inline float3 make_float3(const float& x, const float& y, const float& z) {
   float3 res;
   res.x = x;
@@ -74,6 +84,8 @@ inline float3 make_float3(const float& x, const float& y, const float& z) {
   res.z = z;
   return res;
 }
+#endif
+namespace py = pybind11;
 inline bool operator==(const float3& a, const float3& b) {
   return a.x == b.x && a.y == b.y && a.z == b.z;


@@ -59,6 +59,11 @@ getLastCudaError(const char* errorMessage, const char* file, const int line) {
 #define SHARED __shared__
 #define ACTIVEMASK() __activemask()
 #define BALLOT(mask, val) __ballot_sync((mask), val)
+/* TODO (ROCM-6.2): None of the WARP_* are used anywhere and ROCM-6.2 natively
+ * supports __shfl_*. Disabling until the move to ROCM-6.2.
+ */
+#if !defined(USE_ROCM)
 /**
  * Find the cumulative sum within a warp up to the current
  * thread lane, with each mask thread contributing base.
@@ -115,6 +120,7 @@ INLINE DEVICE float3 WARP_SUM_FLOAT3(
   ret.z = WARP_SUM(group, mask, base.z);
   return ret;
 }
+#endif //! USE_ROCM
 // Floating point.
 // #define FMUL(a, b) __fmul_rn((a), (b))
@@ -142,6 +148,7 @@ INLINE DEVICE float3 WARP_SUM_FLOAT3(
 #define FMA(x, y, z) __fmaf_rn((x), (y), (z))
 #define I2F(a) __int2float_rn(a)
 #define FRCP(x) __frcp_rn(x)
+#if !defined(USE_ROCM)
 __device__ static float atomicMax(float* address, float val) {
   int* address_as_i = (int*)address;
   int old = *address_as_i, assumed;
@@ -166,6 +173,7 @@ __device__ static float atomicMin(float* address, float val) {
   } while (assumed != old);
   return __int_as_float(old);
 }
+#endif //! USE_ROCM
 #define DMAX(a, b) FMAX(a, b)
 #define DMIN(a, b) FMIN(a, b)
 #define DSQRT(a) sqrt(a)
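The guarded atomicMax/atomicMin above emulate floating-point atomics with an integer compare-and-swap on the reinterpreted bits; the USE_ROCM guard presumably avoids clashing with the float overloads HIP already provides. The same retry loop, sketched on the host with std::atomic (illustrative only, not the device code):

#include <atomic>

// Keep trying to install `val` until the stored value is already >= val or
// the compare-exchange succeeds; `observed` is refreshed on each failure.
inline float AtomicMaxFloat(std::atomic<float>& target, float val) {
  float observed = target.load();
  while (observed < val && !target.compare_exchange_weak(observed, val)) {
  }
  return observed;
}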


@@ -357,11 +357,11 @@ void MAX_WS(
 //
 //
 #define END_PARALLEL() \
-  end_parallel:; \
+  end_parallel :; \
   }
 #define END_PARALLEL_NORET() }
 #define END_PARALLEL_2D() \
-  end_parallel:; \
+  end_parallel :; \
   } \
   }
 #define END_PARALLEL_2D_NORET() \


@@ -14,7 +14,7 @@
#include "./commands.h" #include "./commands.h"
namespace pulsar { namespace pulsar {
IHD CamGradInfo::CamGradInfo() { IHD CamGradInfo::CamGradInfo(int x) {
cam_pos = make_float3(0.f, 0.f, 0.f); cam_pos = make_float3(0.f, 0.f, 0.f);
pixel_0_0_center = make_float3(0.f, 0.f, 0.f); pixel_0_0_center = make_float3(0.f, 0.f, 0.f);
pixel_dir_x = make_float3(0.f, 0.f, 0.f); pixel_dir_x = make_float3(0.f, 0.f, 0.f);


@@ -63,7 +63,7 @@ inline bool operator==(const CamInfo& a, const CamInfo& b) {
 };
 struct CamGradInfo {
-  HOST DEVICE CamGradInfo();
+  HOST DEVICE CamGradInfo(int = 0);
   float3 cam_pos;
   float3 pixel_0_0_center;
   float3 pixel_dir_x;


@@ -24,7 +24,7 @@
 // #pragma diag_suppress = 68
 #include <ATen/cuda/CUDAContext.h>
 // #pragma pop
-#include "../cuda/commands.h"
+#include "../gpu/commands.h"
 #else
 #pragma clang diagnostic push
 #pragma clang diagnostic ignored "-Weverything"


@@ -46,6 +46,7 @@ IHD float3 outer_product_sum(const float3& a) {
 }
 // TODO: put intrinsics here.
+#if !defined(USE_ROCM)
 IHD float3 operator+(const float3& a, const float3& b) {
   return make_float3(a.x + b.x, a.y + b.y, a.z + b.z);
 }
@@ -93,6 +94,7 @@ IHD float3 operator*(const float3& a, const float3& b) {
 IHD float3 operator*(const float& a, const float3& b) {
   return b * a;
 }
+#endif //! USE_ROCM
 INLINE DEVICE float length(const float3& v) {
   // TODO: benchmark what's faster.


@@ -93,7 +93,7 @@ HOST void construct(
   MALLOC(self->di_sorted_d, DrawInfo, max_num_balls);
   MALLOC(self->region_flags_d, char, max_num_balls);
   MALLOC(self->num_selected_d, size_t, 1);
-  MALLOC(self->forw_info_d, float, width* height*(3 + 2 * n_track));
+  MALLOC(self->forw_info_d, float, width* height * (3 + 2 * n_track));
   MALLOC(self->min_max_pixels_d, IntersectInfo, 1);
   MALLOC(self->grad_pos_d, float3, max_num_balls);
   MALLOC(self->grad_col_d, float, max_num_balls* n_channels);


@@ -99,7 +99,7 @@ GLOBAL void render(
   /** Whether loading of balls is completed. */
   SHARED bool loading_done;
   /** The number of balls loaded overall (just for statistics). */
-  SHARED int n_balls_loaded;
+  [[maybe_unused]] SHARED int n_balls_loaded;
   /** The area this thread block covers. */
   SHARED IntersectInfo block_area;
   if (thread_block.thread_rank() == 0) {
@@ -283,9 +283,15 @@ GLOBAL void render(
         (percent_allowed_difference > 0.f &&
          max_closest_possible_intersection > depth_threshold) ||
         tracker.get_n_hits() >= max_n_hits;
+#if defined(__CUDACC__) && defined(__HIP_PLATFORM_AMD__)
+    unsigned long long warp_done = __ballot(done);
+    int warp_done_bit_cnt = __popcll(warp_done);
+#else
     uint warp_done = thread_warp.ballot(done);
+    int warp_done_bit_cnt = POPC(warp_done);
+#endif //__CUDACC__ && __HIP_PLATFORM_AMD__
     if (thread_warp.thread_rank() == 0)
-      ATOMICADD_B(&n_pixels_done, POPC(warp_done));
+      ATOMICADD_B(&n_pixels_done, warp_done_bit_cnt);
     // This sync is necessary to keep n_loaded until all threads are done with
     // painting.
     thread_block.sync();
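The new branch accounts for the wider execution group on AMD hardware: a CUDA warp ballot is a 32-bit mask, while an AMD wavefront ballot is 64 bits wide, so the bit count goes through __popcll there. A host-side illustration of counting set lanes for either mask width (C++20, illustrative only):

#include <bit>
#include <cstdint>

// Counting "done" lanes in a ballot mask; only the mask width differs
// between a 32-lane warp and a 64-lane wavefront.
inline int CountDoneLanes(uint32_t warp_mask) {
  return std::popcount(warp_mask);
}
inline int CountDoneLanes(uint64_t wave_mask) {
  return std::popcount(wave_mask);
}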


@@ -213,8 +213,8 @@ std::tuple<size_t, size_t, bool, torch::Tensor> Renderer::arg_check(
     const float& gamma,
     const float& max_depth,
     float& min_depth,
-    const c10::optional<torch::Tensor>& bg_col,
-    const c10::optional<torch::Tensor>& opacity,
+    const std::optional<torch::Tensor>& bg_col,
+    const std::optional<torch::Tensor>& opacity,
     const float& percent_allowed_difference,
     const uint& max_n_hits,
     const uint& mode) {
@@ -668,8 +668,8 @@ std::tuple<torch::Tensor, torch::Tensor> Renderer::forward(
     const float& gamma,
     const float& max_depth,
     float min_depth,
-    const c10::optional<torch::Tensor>& bg_col,
-    const c10::optional<torch::Tensor>& opacity,
+    const std::optional<torch::Tensor>& bg_col,
+    const std::optional<torch::Tensor>& opacity,
     const float& percent_allowed_difference,
     const uint& max_n_hits,
     const uint& mode) {
@@ -888,14 +888,14 @@ std::tuple<torch::Tensor, torch::Tensor> Renderer::forward(
 };
 std::tuple<
-    at::optional<torch::Tensor>,
-    at::optional<torch::Tensor>,
-    at::optional<torch::Tensor>,
-    at::optional<torch::Tensor>,
-    at::optional<torch::Tensor>,
-    at::optional<torch::Tensor>,
-    at::optional<torch::Tensor>,
-    at::optional<torch::Tensor>>
+    std::optional<torch::Tensor>,
+    std::optional<torch::Tensor>,
+    std::optional<torch::Tensor>,
+    std::optional<torch::Tensor>,
+    std::optional<torch::Tensor>,
+    std::optional<torch::Tensor>,
+    std::optional<torch::Tensor>,
+    std::optional<torch::Tensor>>
 Renderer::backward(
     const torch::Tensor& grad_im,
     const torch::Tensor& image,
@@ -912,8 +912,8 @@ Renderer::backward(
     const float& gamma,
     const float& max_depth,
     float min_depth,
-    const c10::optional<torch::Tensor>& bg_col,
-    const c10::optional<torch::Tensor>& opacity,
+    const std::optional<torch::Tensor>& bg_col,
+    const std::optional<torch::Tensor>& opacity,
     const float& percent_allowed_difference,
     const uint& max_n_hits,
     const uint& mode,
@@ -922,7 +922,7 @@ Renderer::backward(
     const bool& dif_rad,
     const bool& dif_cam,
     const bool& dif_opy,
-    const at::optional<std::pair<uint, uint>>& dbg_pos) {
+    const std::optional<std::pair<uint, uint>>& dbg_pos) {
   this->ensure_on_device(this->device_tracker.device());
   size_t batch_size;
   size_t n_points;
@@ -1045,14 +1045,14 @@ Renderer::backward(
   }
   // Prepare the return value.
   std::tuple<
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>>
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>>
       ret;
   if (mode == 1 || (!dif_pos && !dif_col && !dif_rad && !dif_cam && !dif_opy)) {
     return ret;


@@ -44,21 +44,21 @@ struct Renderer {
       const float& gamma,
       const float& max_depth,
       float min_depth,
-      const c10::optional<torch::Tensor>& bg_col,
-      const c10::optional<torch::Tensor>& opacity,
+      const std::optional<torch::Tensor>& bg_col,
+      const std::optional<torch::Tensor>& opacity,
       const float& percent_allowed_difference,
       const uint& max_n_hits,
       const uint& mode);
   std::tuple<
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>,
-      at::optional<torch::Tensor>>
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>,
+      std::optional<torch::Tensor>>
   backward(
       const torch::Tensor& grad_im,
       const torch::Tensor& image,
@@ -75,8 +75,8 @@ struct Renderer {
       const float& gamma,
       const float& max_depth,
       float min_depth,
-      const c10::optional<torch::Tensor>& bg_col,
-      const c10::optional<torch::Tensor>& opacity,
+      const std::optional<torch::Tensor>& bg_col,
+      const std::optional<torch::Tensor>& opacity,
       const float& percent_allowed_difference,
       const uint& max_n_hits,
       const uint& mode,
@@ -85,7 +85,7 @@ struct Renderer {
       const bool& dif_rad,
       const bool& dif_cam,
       const bool& dif_opy,
-      const at::optional<std::pair<uint, uint>>& dbg_pos);
+      const std::optional<std::pair<uint, uint>>& dbg_pos);
   // Infrastructure.
   /**
@@ -115,8 +115,8 @@ struct Renderer {
       const float& gamma,
       const float& max_depth,
       float& min_depth,
-      const c10::optional<torch::Tensor>& bg_col,
-      const c10::optional<torch::Tensor>& opacity,
+      const std::optional<torch::Tensor>& bg_col,
+      const std::optional<torch::Tensor>& opacity,
       const float& percent_allowed_difference,
       const uint& max_n_hits,
       const uint& mode);
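c10::optional and at::optional are aliases of std::optional, so these signature changes are purely mechanical; call sites keep using has_value() and operator* as before. A hypothetical helper in the spirit of these parameters (not part of the patch; built against libtorch):

#include <optional>
#include <torch/torch.h>

// Hypothetical example: fall back to a zero (black) background when no
// bg_col tensor is supplied.
inline torch::Tensor ResolveBgCol(
    const std::optional<torch::Tensor>& bg_col,
    int64_t n_channels) {
  return bg_col.has_value() ? *bg_col : torch::zeros({n_channels});
}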


@@ -244,8 +244,7 @@ at::Tensor RasterizeCoarseCuda(
   if (num_bins_y >= kMaxItemsPerBin || num_bins_x >= kMaxItemsPerBin) {
     std::stringstream ss;
     ss << "In RasterizeCoarseCuda got num_bins_y: " << num_bins_y
-       << ", num_bins_x: " << num_bins_x << ", "
-       << "; that's too many!";
+       << ", num_bins_x: " << num_bins_x << ", " << "; that's too many!";
     AT_ERROR(ss.str());
   }
   auto opts = elems_per_batch.options().dtype(at::kInt);


@@ -144,7 +144,7 @@ __device__ void CheckPixelInsideFace(
   const bool zero_face_area =
       (face_area <= kEpsilon && face_area >= -1.0f * kEpsilon);
-  if (zmax < 0 || cull_backfaces && back_face || outside_bbox ||
+  if (zmax < 0 || (cull_backfaces && back_face) || outside_bbox ||
       zero_face_area) {
     return;
   }
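The added parentheses only make the grouping explicit; && already binds tighter than ||, so behavior is unchanged (this mainly silences compiler parentheses warnings). A compile-time check of that precedence, for illustration:

// a || b && c parses as a || (b && c); grouping it the other way would
// change the result for a = true, b = c = false.
static_assert((true || false && false) == (true || (false && false)), "");
static_assert(((true || false) && false) == false, "");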


@@ -18,6 +18,8 @@ const auto vEpsilon = 1e-8;
 // Common functions and operators for float2.
+// Complex arithmetic is already defined for AMD.
+#if !defined(USE_ROCM)
 __device__ inline float2 operator-(const float2& a, const float2& b) {
   return make_float2(a.x - b.x, a.y - b.y);
 }
@@ -41,6 +43,7 @@ __device__ inline float2 operator*(const float2& a, const float2& b) {
 __device__ inline float2 operator*(const float a, const float2& b) {
   return make_float2(a * b.x, a * b.y);
 }
+#endif
 __device__ inline float FloatMin3(const float a, const float b, const float c) {
   return fminf(a, fminf(b, c));


@@ -23,37 +23,51 @@ WarpReduceMin(scalar_t* min_dists, int64_t* min_idxs, const size_t tid) {
     min_idxs[tid] = min_idxs[tid + 32];
     min_dists[tid] = min_dists[tid + 32];
   }
+// AMD does not use explicit syncwarp and instead automatically inserts memory
+// fences during compilation.
+#if !defined(USE_ROCM)
   __syncwarp();
+#endif
   // s = 16
   if (min_dists[tid] > min_dists[tid + 16]) {
     min_idxs[tid] = min_idxs[tid + 16];
     min_dists[tid] = min_dists[tid + 16];
   }
+#if !defined(USE_ROCM)
   __syncwarp();
+#endif
   // s = 8
   if (min_dists[tid] > min_dists[tid + 8]) {
     min_idxs[tid] = min_idxs[tid + 8];
     min_dists[tid] = min_dists[tid + 8];
   }
+#if !defined(USE_ROCM)
   __syncwarp();
+#endif
   // s = 4
   if (min_dists[tid] > min_dists[tid + 4]) {
     min_idxs[tid] = min_idxs[tid + 4];
     min_dists[tid] = min_dists[tid + 4];
   }
+#if !defined(USE_ROCM)
   __syncwarp();
+#endif
   // s = 2
   if (min_dists[tid] > min_dists[tid + 2]) {
     min_idxs[tid] = min_idxs[tid + 2];
     min_dists[tid] = min_dists[tid + 2];
   }
+#if !defined(USE_ROCM)
   __syncwarp();
+#endif
   // s = 1
   if (min_dists[tid] > min_dists[tid + 1]) {
     min_idxs[tid] = min_idxs[tid + 1];
     min_dists[tid] = min_dists[tid + 1];
   }
+#if !defined(USE_ROCM)
   __syncwarp();
+#endif
 }
 template <typename scalar_t>
@@ -65,30 +79,42 @@ __device__ void WarpReduceMax(
     dists[tid] = dists[tid + 32];
     dists_idx[tid] = dists_idx[tid + 32];
   }
+#if !defined(USE_ROCM)
   __syncwarp();
+#endif
   if (dists[tid] < dists[tid + 16]) {
     dists[tid] = dists[tid + 16];
     dists_idx[tid] = dists_idx[tid + 16];
   }
+#if !defined(USE_ROCM)
   __syncwarp();
+#endif
   if (dists[tid] < dists[tid + 8]) {
     dists[tid] = dists[tid + 8];
     dists_idx[tid] = dists_idx[tid + 8];
   }
+#if !defined(USE_ROCM)
   __syncwarp();
+#endif
   if (dists[tid] < dists[tid + 4]) {
     dists[tid] = dists[tid + 4];
     dists_idx[tid] = dists_idx[tid + 4];
   }
+#if !defined(USE_ROCM)
   __syncwarp();
+#endif
   if (dists[tid] < dists[tid + 2]) {
     dists[tid] = dists[tid + 2];
     dists_idx[tid] = dists_idx[tid + 2];
   }
+#if !defined(USE_ROCM)
   __syncwarp();
+#endif
   if (dists[tid] < dists[tid + 1]) {
     dists[tid] = dists[tid + 1];
     dists_idx[tid] = dists_idx[tid + 1];
   }
+#if !defined(USE_ROCM)
   __syncwarp();
+#endif
 }
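Both reductions above are fully unrolled warp-level folds: at each step a thread folds in its partner `stride` slots away, halving the active width from 32 down to 1, with an explicit __syncwarp() between steps on CUDA (and, per the comment above, implicit fences on AMD). The same logic as a plain loop, sketch only:

#include <cstddef>
#include <cstdint>

// Loop form of the unrolled shared-memory min-reduction above; the device
// code additionally synchronizes the warp between steps on CUDA.
template <typename scalar_t>
void TreeReduceMin(scalar_t* min_dists, int64_t* min_idxs, std::size_t tid) {
  for (std::size_t stride = 32; stride >= 1; stride /= 2) {
    if (min_dists[tid] > min_dists[tid + stride]) {
      min_dists[tid] = min_dists[tid + stride];
      min_idxs[tid] = min_idxs[tid + stride];
    }
  }
}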


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 from .r2n2 import BlenderCamera, collate_batched_R2N2, R2N2, render_cubified_voxels
 from .shapenet import ShapeNetCore
 from .utils import collate_batched_meshes


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 from .r2n2 import R2N2
 from .utils import BlenderCamera, collate_batched_R2N2, render_cubified_voxels


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import json
 import warnings
 from os import path


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import math
 from typing import Dict, List


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 from .shapenet_core import ShapeNetCore


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import json
 import os
 import warnings


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import warnings
 from typing import Dict, List, Optional, Tuple


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 from typing import Dict, List
 from pytorch3d.renderer.mesh import TexturesAtlas


@@ -3,3 +3,5 @@
 #
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe


@@ -3,3 +3,5 @@
 #
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import torch
 from pytorch3d.implicitron.tools.config import registry


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 from dataclasses import dataclass
 from enum import Enum
 from typing import Iterator, List, Optional, Tuple


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 from typing import Optional, Tuple
 from pytorch3d.implicitron.tools.config import (
@@ -39,7 +41,7 @@ class DataSourceBase(ReplaceableBase):
 @registry.register
-class ImplicitronDataSource(DataSourceBase):  # pyre-ignore[13]
+class ImplicitronDataSource(DataSourceBase):
     """
     Represents the data used in Implicitron. This is the only implementation
     of DataSourceBase provided.
@@ -50,8 +52,11 @@ class ImplicitronDataSource(DataSourceBase): # pyre-ignore[13]
         data_loader_map_provider_class_type: identifies type for data_loader_map_provider.
     """
+    # pyre-fixme[13]: Attribute `dataset_map_provider` is never initialized.
     dataset_map_provider: DatasetMapProviderBase
+    # pyre-fixme[13]: Attribute `dataset_map_provider_class_type` is never initialized.
     dataset_map_provider_class_type: str
+    # pyre-fixme[13]: Attribute `data_loader_map_provider` is never initialized.
     data_loader_map_provider: DataLoaderMapProviderBase
     data_loader_map_provider_class_type: str = "SequenceDataLoaderMapProvider"


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 from collections import defaultdict
 from dataclasses import dataclass
 from typing import (


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import logging
 import os
 from dataclasses import dataclass


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import os
 from abc import ABC, abstractmethod
 from collections import defaultdict
@@ -274,6 +276,7 @@ class FrameData(Mapping[str, Any]):
             image_size_hw=tuple(self.effective_image_size_hw),  # pyre-ignore
         )
         crop_bbox_xywh = bbox_xyxy_to_xywh(clamp_bbox_xyxy)
+        self.crop_bbox_xywh = crop_bbox_xywh
         if self.fg_probability is not None:
             self.fg_probability = crop_around_box(
@@ -432,7 +435,7 @@ class FrameData(Mapping[str, Any]):
             # TODO: don't store K; enforce working in NDC space
             return join_cameras_as_batch(batch)
         else:
-            return torch.utils.data._utils.collate.default_collate(batch)
+            return torch.utils.data.dataloader.default_collate(batch)
 FrameDataSubtype = TypeVar("FrameDataSubtype", bound=FrameData)
@@ -576,11 +579,11 @@ class GenericFrameDataBuilder(FrameDataBuilderBase[FrameDataSubtype], ABC):
             camera_quality_score=safe_as_tensor(
                 sequence_annotation.viewpoint_quality_score, torch.float
             ),
-            point_cloud_quality_score=safe_as_tensor(
-                point_cloud.quality_score, torch.float
-            )
-            if point_cloud is not None
-            else None,
+            point_cloud_quality_score=(
+                safe_as_tensor(point_cloud.quality_score, torch.float)
+                if point_cloud is not None
+                else None
+            ),
         )
         fg_mask_np: Optional[np.ndarray] = None


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import copy
 import functools
 import gzip
@@ -124,9 +126,9 @@ class JsonIndexDataset(DatasetBase, ReplaceableBase):
         dimension of the cropping bounding box, relative to box size.
     """
-    frame_annotations_type: ClassVar[
-        Type[types.FrameAnnotation]
-    ] = types.FrameAnnotation
+    frame_annotations_type: ClassVar[Type[types.FrameAnnotation]] = (
+        types.FrameAnnotation
+    )
     path_manager: Any = None
     frame_annotations_file: str = ""


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import json
 import os
@@ -64,7 +66,7 @@ _NEED_CONTROL: Tuple[str, ...] = (
 @registry.register
-class JsonIndexDatasetMapProvider(DatasetMapProviderBase):  # pyre-ignore [13]
+class JsonIndexDatasetMapProvider(DatasetMapProviderBase):
     """
     Generates the training / validation and testing dataset objects for
     a dataset laid out on disk like Co3D, with annotations in json files.
@@ -93,6 +95,7 @@ class JsonIndexDatasetMapProvider(DatasetMapProviderBase): # pyre-ignore [13]
         path_manager_factory_class_type: The class type of `path_manager_factory`.
     """
+    # pyre-fixme[13]: Attribute `category` is never initialized.
     category: str
     task_str: str = "singlesequence"
     dataset_root: str = _CO3D_DATASET_ROOT
@@ -102,8 +105,10 @@ class JsonIndexDatasetMapProvider(DatasetMapProviderBase): # pyre-ignore [13]
     test_restrict_sequence_id: int = -1
     assert_single_seq: bool = False
     only_test_set: bool = False
+    # pyre-fixme[13]: Attribute `dataset` is never initialized.
     dataset: JsonIndexDataset
     dataset_class_type: str = "JsonIndexDataset"
+    # pyre-fixme[13]: Attribute `path_manager_factory` is never initialized.
     path_manager_factory: PathManagerFactory
     path_manager_factory_class_type: str = "PathManagerFactory"


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import copy
 import json
@@ -54,7 +56,7 @@ logger = logging.getLogger(__name__)
 @registry.register
-class JsonIndexDatasetMapProviderV2(DatasetMapProviderBase):  # pyre-ignore [13]
+class JsonIndexDatasetMapProviderV2(DatasetMapProviderBase):
     """
     Generates the training, validation, and testing dataset objects for
     a dataset laid out on disk like CO3Dv2, with annotations in gzipped json files.
@@ -169,7 +171,9 @@ class JsonIndexDatasetMapProviderV2(DatasetMapProviderBase): # pyre-ignore [13]
         path_manager_factory_class_type: The class type of `path_manager_factory`.
     """
+    # pyre-fixme[13]: Attribute `category` is never initialized.
     category: str
+    # pyre-fixme[13]: Attribute `subset_name` is never initialized.
     subset_name: str
     dataset_root: str = _CO3DV2_DATASET_ROOT
@@ -181,8 +185,10 @@ class JsonIndexDatasetMapProviderV2(DatasetMapProviderBase): # pyre-ignore [13]
     n_known_frames_for_test: int = 0
     dataset_class_type: str = "JsonIndexDataset"
+    # pyre-fixme[13]: Attribute `dataset` is never initialized.
     dataset: JsonIndexDataset
+    # pyre-fixme[13]: Attribute `path_manager_factory` is never initialized.
     path_manager_factory: PathManagerFactory
     path_manager_factory_class_type: str = "PathManagerFactory"


@@ -4,6 +4,8 @@
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
+# pyre-unsafe
 import numpy as np
 import torch


@@ -1,6 +1,8 @@
 # @lint-ignore-every LICENSELINT
 # Adapted from https://github.com/bmild/nerf/blob/master/load_blender.py
 # Copyright (c) 2020 bmild
+# pyre-unsafe
 import json
 import os


@@ -1,6 +1,8 @@
 # @lint-ignore-every LICENSELINT
 # Adapted from https://github.com/bmild/nerf/blob/master/load_llff.py
 # Copyright (c) 2020 bmild
+# pyre-unsafe
 import logging
 import os
 import warnings
@@ -34,11 +36,7 @@ def _minify(basedir, path_manager, factors=(), resolutions=()):
     imgdir = os.path.join(basedir, "images")
     imgs = [os.path.join(imgdir, f) for f in sorted(_ls(path_manager, imgdir))]
-    imgs = [
-        f
-        for f in imgs
-        if any([f.endswith(ex) for ex in ["JPG", "jpg", "png", "jpeg", "PNG"]])
-    ]
+    imgs = [f for f in imgs if f.endswith(("JPG", "jpg", "png", "jpeg", "PNG"))]
     imgdir_orig = imgdir
     wd = os.getcwd()

Some files were not shown because too many files have changed in this diff.