19 Commits

Author SHA1 Message Date
bottler
9c586b1351 Run tests in github action not circleci (#1896)
Summary: Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1896

Differential Revision: D65272512

Pulled By: bottler
2024-10-31 08:41:20 -07:00
Richard Barnes
e13848265d at::optional -> std::optional (#1170)
Summary: Pull Request resolved: https://github.com/pytorch/ao/pull/1170

Reviewed By: gineshidalgo99

Differential Revision: D64938040

fbshipit-source-id: 57f98b90676ad0164a6975ea50e4414fd85ae6c4
2024-10-25 06:37:57 -07:00
generatedunixname89002005307016
58566963d6 Add type error suppressions for upcoming upgrade
Reviewed By: MaggieMoss

Differential Revision: D64502797

fbshipit-source-id: cee9a54dfa8a005d5912b895d0bd094f352c5c6f
2024-10-16 19:22:01 -07:00
Suresh Babu Kolla
e17ed5cd50 Hipify Pulsar for PyTorch3D
Summary:
- Hipified PyTorch Pulsar
   - Created a separate target for Pulsar tests and enabled RE testing
   - The full PyTorch3D test suite requires additional work, like fixing EGL
     dependencies on AMD

Reviewed By: danzimm

Differential Revision: D61339912

fbshipit-source-id: 0d10bc966e4de4a959f3834a386bad24e449dc1f
2024-10-09 14:38:42 -07:00
Richard Barnes
8ed0c7a002 c10::optional -> std::optional
Summary: `c10::optional` is an alias for `std::optional`. Let's remove the alias and use the real thing.

Reviewed By: meyering

Differential Revision: D63402341

fbshipit-source-id: 241383e7ca4b2f3f1f9cac3af083056123dfd02b
2024-10-03 14:38:37 -07:00
Richard Barnes
2da913c7e6 c10::optional -> std::optional
Summary: `c10::optional` is an alias for `std::optional`. Let's remove the alias and use the real thing.

Reviewed By: palmje

Differential Revision: D63409387

fbshipit-source-id: fb6db59a14db9e897e2e6b6ad378f33bf2af86e8
2024-10-02 11:09:29 -07:00
generatedunixname89002005307016
fca83e6369 Convert .pyre_configuration.local to fast by default architecture] [batch:23/263] [shard:3/N] [A]
Reviewed By: connernilsen

Differential Revision: D63415925

fbshipit-source-id: c3e28405c70f9edcf8c21457ac4faf7315b07322
2024-09-25 17:34:03 -07:00
Jeremy Reizenstein
75ebeeaea0 update version to 0.7.8
Summary: as title

Reviewed By: das-intensity

Differential Revision: D62588556

fbshipit-source-id: 55bae19dd1df796e83179cd29d805fcd871b6d23
2024-09-13 02:31:49 -07:00
Jeremy Reizenstein
ab793177c6 remove pytorch2.0 builds
Summary: these are failing in ci

Reviewed By: das-intensity

Differential Revision: D62594666

fbshipit-source-id: 5e3a7441be2978803dc2d3e361365e0fffa7ad3b
2024-09-13 02:07:25 -07:00
Jeremy Reizenstein
9acdd67b83 fix obj material indexing bug #1368
Summary:
Make a negative material index (meaning no material) no longer an error

fixes https://github.com/facebookresearch/pytorch3d/issues/1368

Reviewed By: das-intensity

Differential Revision: D62177991

fbshipit-source-id: e5ed433bde1f54251c4d4b6db073c029cbe87343
2024-09-13 02:00:49 -07:00
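The fix swaps an index-then-patch pattern for a shifted lookup, so a face index of -1 ("no material") maps to an empty name instead of wrapping around to the last material. A minimal numpy sketch of the trick, using made-up material names rather than the loader's actual variables:

```python
import numpy as np

material_names = ["metal", "wood"]          # names parsed from the .mtl file
faces_materials_idx = np.array([0, -1, 1])  # -1 marks a face with no material

# Old behaviour: plain fancy indexing sends -1 to the *last* material and then
# patches it afterwards, which is what the issue tripped over.
# New behaviour: prepend "" and shift indices, so -1 lands on the empty string.
face_material_names = np.array([""] + material_names)[faces_materials_idx + 1]
print(face_material_names)  # ['metal' '' 'wood']
```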
Nicholas Dahm
3f428d9981 pytorch 2.4.0 + 2.4.1
Summary:
Apparently pytorch 2.4 is now supported as per [this closed issue](https://github.com/facebookresearch/pytorch3d/issues/1863).

Added the `2.4.0` & `2.4.1` versions to `regenerate.py`, then ran it as per `README_fb.md`, which generated the `config.yml` changes.

Reviewed By: bottler

Differential Revision: D62517831

fbshipit-source-id: 002e276dfe2fa078136ff2f6c747d937abbadd1a
2024-09-11 15:09:43 -07:00
Josh Fromm
05cbea115a Hipify Pytorch3D (#1851)
Summary:
X-link: https://github.com/pytorch/pytorch/pull/133343

X-link: https://github.com/fairinternal/pytorch3d/pull/45

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1851

Enables pytorch3d to build on AMD. An important part of enabling this was not compiling the Pulsar backend when the target is AMD. There are simply too many kernel incompatibilities to make it work (I tried haha). Fortunately, it doesn't seem like most modern applications of pytorch3d rely on Pulsar. We should be able to unlock most of pytorch3d's goodness on AMD without it.

Reviewed By: bottler, houseroad

Differential Revision: D61171993

fbshipit-source-id: fd4aee378a3568b22676c5bf2b727c135ff710af
2024-08-15 16:18:22 -07:00
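The C++/CUDA hunks further down guard Pulsar-only code with `#if !defined(USE_ROCM)`. As a purely hypothetical Python sketch (not the project's actual setup.py logic) of how a build script could skip Pulsar sources when the installed PyTorch is a ROCm build:

```python
# Hypothetical sketch only: skip Pulsar kernel sources on AMD/ROCm builds.
import glob
import torch

def collect_extension_sources():
    sources = glob.glob("pytorch3d/csrc/**/*.cu", recursive=True)
    if torch.version.hip is not None:  # non-None only on ROCm builds of PyTorch
        # Pulsar is not built on AMD, so drop its kernels from the source list.
        sources = [s for s in sources if "/pulsar/" not in s]
    return sources
```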
generatedunixname89002005307016
38afdcfc68 upgrade pyre version in fbcode/vision - batch 2
Reviewed By: bottler

Differential Revision: D60992234

fbshipit-source-id: 899db6ed590ef966ff651c11027819e59b8401a3
2024-08-09 02:07:45 -07:00
Christine Sun
1e0b1d9c72 Remove Python versions from Install.md
Summary: To avoid the installation instructions for PyTorch3D becoming out of date, update them to say just `Python` instead of listing specific Python versions. Readers will understand that a compatible Python version is needed.

Reviewed By: bottler

Differential Revision: D60919848

fbshipit-source-id: 5e974970a0db3d3d32fae44e5dd30cbc1ce237a9
2024-08-07 13:46:31 -07:00
Rebecca Chen (Python)
44702fdb4b Add "max" point reduction for chamfer distance
Summary:
* Adds a "max" option for the point_reduction input to the
  chamfer_distance function.
* When combining the x and y directions, maxes the losses instead
  of summing them when point_reduction="max".
* Moves batch reduction to happen after the directions are
  combined.
* Adds test_chamfer_point_reduction_max and
  test_single_directional_chamfer_point_reduction_max tests.

Fixes  https://github.com/facebookresearch/pytorch3d/issues/1838

Reviewed By: bottler

Differential Revision: D60614661

fbshipit-source-id: 7879816acfda03e945bada951b931d2c522756eb
2024-08-02 10:46:07 -07:00
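A usage sketch of the new option (assuming a pytorch3d build that includes this change); the tensors below are random placeholders:

```python
import torch
from pytorch3d.loss import chamfer_distance

x = torch.rand(4, 100, 3)  # batch of 4 point clouds, 100 points each
y = torch.rand(4, 80, 3)   # batch of 4 point clouds, 80 points each

# Default behaviour: mean over points, then mean over the batch.
loss_mean, _ = chamfer_distance(x, y)

# New in this change: take the max per-point distance in each direction and
# combine the two directions with max instead of summing them.
loss_max, _ = chamfer_distance(x, y, point_reduction="max")
```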
Jeremy Reizenstein
7edaee71a9 allow matrix_to_quaternion onnx export
Summary: Attempt to allow torch.onnx.dynamo_export(matrix_to_quaternion) to work.

Differential Revision: D59812279

fbshipit-source-id: 4497e5b543bec9d5c2bdccfb779d154750a075ad
2024-07-16 11:30:20 -07:00
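A hedged sketch of the export this commit aims to enable (requires a PyTorch 2.x build that provides `torch.onnx.dynamo_export`; as the summary says, this is an attempt, so the export may still fail without the updated code):

```python
import torch
from pytorch3d.transforms import matrix_to_quaternion

class MatToQuat(torch.nn.Module):
    def forward(self, m: torch.Tensor) -> torch.Tensor:
        return matrix_to_quaternion(m)

rot = torch.eye(3).expand(2, 3, 3)  # a batch of two identity rotation matrices
onnx_program = torch.onnx.dynamo_export(MatToQuat(), rot)
onnx_program.save("matrix_to_quaternion.onnx")
```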
Roman Shapovalov
d0d0e02007 Fix: setting FrameData.crop_bbox_xywh for backwards compatibility
Summary: This diff fixes a backwards-compatibility issue in PyTorch3D's dataset API: the code ensures that the `crop_bbox_xywh` attribute is set when the box_crop flag is on. This is an implementation detail that people should not really rely on; however, some code depends on this behaviour.

Reviewed By: bottler

Differential Revision: D59777449

fbshipit-source-id: b875e9eb909038b8629ccdade87661bb2c39d529
2024-07-16 02:21:13 -07:00
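For reference, `crop_bbox_xywh` follows the usual (x_min, y_min, width, height) convention. A standalone sketch of the xyxy-to-xywh conversion used when setting it (an illustration, not the library's `bbox_xyxy_to_xywh` itself):

```python
import torch

def bbox_xyxy_to_xywh_sketch(xyxy: torch.Tensor) -> torch.Tensor:
    # (x_min, y_min, x_max, y_max) -> (x_min, y_min, width, height)
    x0, y0, x1, y1 = xyxy.unbind(-1)
    return torch.stack([x0, y0, x1 - x0, y1 - y0], dim=-1)

print(bbox_xyxy_to_xywh_sketch(torch.tensor([10.0, 20.0, 110.0, 220.0])))
# tensor([ 10.,  20., 100., 200.])
```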
Jeremy Reizenstein
4df110b0a9 remove fvcore dependency
Summary: fvcore is not actually needed, and it was causing conda-forge confusion to do with python_abi, which required users to add `-c conda-forge` when installing pytorch3d.

Reviewed By: patricklabatut

Differential Revision: D59587930

fbshipit-source-id: 961ae13a62e1b2b2ce6d8781db38bd97eca69e65
2024-07-11 04:35:38 -07:00
Huy Do
51fd114d8b Forward fix internal pyre failure from D58983461
Summary:
X-link: https://github.com/pytorch/pytorch/pull/129525

Somehow, using the underscore aliases of some builtin types breaks Pyre

Reviewed By: malfet, clee2000

Differential Revision: D59029768

fbshipit-source-id: cfa2171b66475727b9545355e57a8297c1dc0bc6
2024-06-27 07:35:18 -07:00
90 changed files with 539 additions and 220 deletions

View File

@@ -162,34 +162,6 @@ workflows:
jobs:
# - main:
# context: DOCKERHUB_TOKEN
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py38_cu117_pyt200
python_version: '3.8'
pytorch_version: 2.0.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py38_cu118_pyt200
python_version: '3.8'
pytorch_version: 2.0.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py38_cu117_pyt201
python_version: '3.8'
pytorch_version: 2.0.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py38_cu118_pyt201
python_version: '3.8'
pytorch_version: 2.0.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
@@ -275,33 +247,33 @@ workflows:
python_version: '3.8'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py39_cu117_pyt200
python_version: '3.9'
pytorch_version: 2.0.0
cu_version: cu118
name: linux_conda_py38_cu118_pyt240
python_version: '3.8'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py38_cu121_pyt240
python_version: '3.8'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py39_cu118_pyt200
python_version: '3.9'
pytorch_version: 2.0.0
name: linux_conda_py38_cu118_pyt241
python_version: '3.8'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py39_cu117_pyt201
python_version: '3.9'
pytorch_version: 2.0.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py39_cu118_pyt201
python_version: '3.9'
pytorch_version: 2.0.1
cu_version: cu121
name: linux_conda_py38_cu121_pyt241
python_version: '3.8'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
@@ -387,33 +359,33 @@ workflows:
python_version: '3.9'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py310_cu117_pyt200
python_version: '3.10'
pytorch_version: 2.0.0
cu_version: cu118
name: linux_conda_py39_cu118_pyt240
python_version: '3.9'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py39_cu121_pyt240
python_version: '3.9'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt200
python_version: '3.10'
pytorch_version: 2.0.0
name: linux_conda_py39_cu118_pyt241
python_version: '3.9'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda117
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu117
name: linux_conda_py310_cu117_pyt201
python_version: '3.10'
pytorch_version: 2.0.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt201
python_version: '3.10'
pytorch_version: 2.0.1
cu_version: cu121
name: linux_conda_py39_cu121_pyt241
python_version: '3.9'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
@@ -498,6 +470,34 @@ workflows:
name: linux_conda_py310_cu121_pyt231
python_version: '3.10'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt240
python_version: '3.10'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py310_cu121_pyt240
python_version: '3.10'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py310_cu118_pyt241
python_version: '3.10'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py310_cu121_pyt241
python_version: '3.10'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
@@ -582,6 +582,34 @@ workflows:
name: linux_conda_py311_cu121_pyt231
python_version: '3.11'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py311_cu118_pyt240
python_version: '3.11'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py311_cu121_pyt240
python_version: '3.11'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py311_cu118_pyt241
python_version: '3.11'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py311_cu121_pyt241
python_version: '3.11'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
@@ -624,6 +652,34 @@ workflows:
name: linux_conda_py312_cu121_pyt231
python_version: '3.12'
pytorch_version: 2.3.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py312_cu118_pyt240
python_version: '3.12'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py312_cu121_pyt240
python_version: '3.12'
pytorch_version: 2.4.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda118
context: DOCKERHUB_TOKEN
cu_version: cu118
name: linux_conda_py312_cu118_pyt241
python_version: '3.12'
pytorch_version: 2.4.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda121
context: DOCKERHUB_TOKEN
cu_version: cu121
name: linux_conda_py312_cu121_pyt241
python_version: '3.12'
pytorch_version: 2.4.1
- binary_linux_conda_cuda:
name: testrun_conda_cuda_py310_cu117_pyt201
context: DOCKERHUB_TOKEN

View File

@@ -19,14 +19,14 @@ from packaging import version
# The CUDA versions which have pytorch conda packages available for linux for each
# version of pytorch.
CONDA_CUDA_VERSIONS = {
"2.0.0": ["cu117", "cu118"],
"2.0.1": ["cu117", "cu118"],
"2.1.0": ["cu118", "cu121"],
"2.1.1": ["cu118", "cu121"],
"2.1.2": ["cu118", "cu121"],
"2.2.0": ["cu118", "cu121"],
"2.2.2": ["cu118", "cu121"],
"2.3.1": ["cu118", "cu121"],
"2.4.0": ["cu118", "cu121"],
"2.4.1": ["cu118", "cu121"],
}

.github/workflows/build.yml (new file, 20 lines added)
View File

@@ -0,0 +1,20 @@
name: facebookresearch/pytorch3d/build_and_test
on:
pull_request:
branches:
- main
jobs:
binary_linux_conda_cuda:
runs-on: 4-core-ubuntu-gpu-t4
env:
PYTHON_VERSION: "3.12"
BUILD_VERSION: "${{ github.run_number }}"
PYTORCH_VERSION: "2.4.1"
CU_VERSION: "cu121"
JUST_TESTRUN: 1
steps:
- uses: actions/checkout@v4
- name: Build and run tests
run: |-
conda create --name env --yes --quiet conda-build
conda run --no-capture-output --name env python3 ./packaging/build_conda.py --use-conda-cuda

View File

@@ -8,11 +8,10 @@
The core library is written in PyTorch. Several components have underlying implementation in CUDA for improved performance. A subset of these components have CPU implementations in C++/PyTorch. It is advised to use PyTorch3D with GPU support in order to use all the features.
- Linux or macOS or Windows
- Python 3.8, 3.9 or 3.10
- PyTorch 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0 or 2.3.1.
- Python
- PyTorch 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0 or 2.4.1.
- torchvision that matches the PyTorch installation. You can install them together as explained at pytorch.org to make sure of this.
- gcc & g++ ≥ 4.9
- [fvcore](https://github.com/facebookresearch/fvcore)
- [ioPath](https://github.com/facebookresearch/iopath)
- If CUDA is to be used, use a version which is supported by the corresponding pytorch version and at least version 9.2.
- If CUDA older than 11.7 is to be used and you are building from source, the CUB library must be available. We recommend version 1.10.0.
@@ -22,7 +21,7 @@ The runtime dependencies can be installed by running:
conda create -n pytorch3d python=3.9
conda activate pytorch3d
conda install pytorch=1.13.0 torchvision pytorch-cuda=11.6 -c pytorch -c nvidia
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c iopath iopath
```
For the CUB build time dependency, which you only need if you have CUDA older than 11.7, if you are using conda, you can continue with
@@ -49,6 +48,7 @@ For developing on top of PyTorch3D or contributing, you will need to run the lin
- tdqm
- jupyter
- imageio
- fvcore
- plotly
- opencv-python
@@ -59,6 +59,7 @@ conda install jupyter
pip install scikit-image matplotlib imageio plotly opencv-python
# Tests/Linting
conda install -c fvcore -c conda-forge fvcore
pip install black usort flake8 flake8-bugbear flake8-comprehensions
```
@@ -97,7 +98,7 @@ version_str="".join([
torch.version.cuda.replace(".",""),
f"_pyt{pyt_version_str}"
])
!pip install fvcore iopath
!pip install iopath
!pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
```

View File

@@ -23,7 +23,7 @@ conda init bash
source ~/.bashrc
conda create -y -n myenv python=3.8 matplotlib ipython ipywidgets nbconvert
conda activate myenv
conda install -y -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -y -c iopath iopath
conda install -y -c pytorch pytorch=1.6.0 cudatoolkit=10.1 torchvision
conda install -y -c pytorch3d-nightly pytorch3d
pip install plotly scikit-image

View File

@@ -5,7 +5,6 @@ sphinx_rtd_theme
sphinx_markdown_tables
numpy
iopath
fvcore
https://download.pytorch.org/whl/cpu/torchvision-0.15.2%2Bcpu-cp311-cp311-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torch-2.0.1%2Bcpu-cp311-cp311-linux_x86_64.whl
omegaconf

View File

@@ -96,7 +96,7 @@
" torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{pyt_version_str}\"\n",
" ])\n",
" !pip install fvcore iopath\n",
" !pip install iopath\n",
" if sys.platform.startswith(\"linux\"):\n",
" print(\"Trying to install wheel for PyTorch3D\")\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",

View File

@@ -83,7 +83,7 @@
" torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{pyt_version_str}\"\n",
" ])\n",
" !pip install fvcore iopath\n",
" !pip install iopath\n",
" if sys.platform.startswith(\"linux\"):\n",
" print(\"Trying to install wheel for PyTorch3D\")\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",

View File

@@ -58,7 +58,7 @@
" torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{pyt_version_str}\"\n",
" ])\n",
" !pip install fvcore iopath\n",
" !pip install iopath\n",
" if sys.platform.startswith(\"linux\"):\n",
" print(\"Trying to install wheel for PyTorch3D\")\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",

View File

@@ -97,7 +97,7 @@
" torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{pyt_version_str}\"\n",
" ])\n",
" !pip install fvcore iopath\n",
" !pip install iopath\n",
" if sys.platform.startswith(\"linux\"):\n",
" print(\"Trying to install wheel for PyTorch3D\")\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",

View File

@@ -63,7 +63,7 @@
" torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{pyt_version_str}\"\n",
" ])\n",
" !pip install fvcore iopath\n",
" !pip install iopath\n",
" if sys.platform.startswith(\"linux\"):\n",
" print(\"Trying to install wheel for PyTorch3D\")\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",

View File

@@ -75,7 +75,7 @@
" torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{pyt_version_str}\"\n",
" ])\n",
" !pip install fvcore iopath\n",
" !pip install iopath\n",
" if sys.platform.startswith(\"linux\"):\n",
" print(\"Trying to install wheel for PyTorch3D\")\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",

View File

@@ -54,7 +54,7 @@
" torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{pyt_version_str}\"\n",
" ])\n",
" !pip install fvcore iopath\n",
" !pip install iopath\n",
" if sys.platform.startswith(\"linux\"):\n",
" print(\"Trying to install wheel for PyTorch3D\")\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",

View File

@@ -85,7 +85,7 @@
" torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{pyt_version_str}\"\n",
" ])\n",
" !pip install fvcore iopath\n",
" !pip install iopath\n",
" if sys.platform.startswith(\"linux\"):\n",
" print(\"Trying to install wheel for PyTorch3D\")\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",

View File

@@ -79,7 +79,7 @@
" torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{pyt_version_str}\"\n",
" ])\n",
" !pip install fvcore iopath\n",
" !pip install iopath\n",
" if sys.platform.startswith(\"linux\"):\n",
" print(\"Trying to install wheel for PyTorch3D\")\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",

View File

@@ -57,7 +57,7 @@
" torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{pyt_version_str}\"\n",
" ])\n",
" !pip install fvcore iopath\n",
" !pip install iopath\n",
" if sys.platform.startswith(\"linux\"):\n",
" print(\"Trying to install wheel for PyTorch3D\")\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",

View File

@@ -64,7 +64,7 @@
" torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{pyt_version_str}\"\n",
" ])\n",
" !pip install fvcore iopath\n",
" !pip install iopath\n",
" if sys.platform.startswith(\"linux\"):\n",
" print(\"Trying to install wheel for PyTorch3D\")\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",

View File

@@ -80,7 +80,7 @@
" torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{pyt_version_str}\"\n",
" ])\n",
" !pip install fvcore iopath\n",
" !pip install iopath\n",
" if sys.platform.startswith(\"linux\"):\n",
" print(\"Trying to install wheel for PyTorch3D\")\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",

View File

@@ -4,10 +4,11 @@
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os.path
import runpy
import subprocess
from typing import List
from typing import List, Tuple
# required env vars:
# CU_VERSION: E.g. cu112
@@ -23,7 +24,7 @@ pytorch_major_minor = tuple(int(i) for i in PYTORCH_VERSION.split(".")[:2])
source_root_dir = os.environ["PWD"]
def version_constraint(version):
def version_constraint(version) -> str:
"""
Given version "11.3" returns " >=11.3,<11.4"
"""
@@ -32,7 +33,7 @@ def version_constraint(version):
return f" >={version},<{upper}"
def get_cuda_major_minor():
def get_cuda_major_minor() -> Tuple[str, str]:
if CU_VERSION == "cpu":
raise ValueError("fn only for cuda builds")
if len(CU_VERSION) != 5 or CU_VERSION[:2] != "cu":
@@ -42,11 +43,10 @@ def get_cuda_major_minor():
return major, minor
def setup_cuda():
def setup_cuda(use_conda_cuda: bool) -> List[str]:
if CU_VERSION == "cpu":
return
return []
major, minor = get_cuda_major_minor()
os.environ["CUDA_HOME"] = f"/usr/local/cuda-{major}.{minor}/"
os.environ["FORCE_CUDA"] = "1"
basic_nvcc_flags = (
@@ -75,6 +75,15 @@ def setup_cuda():
if os.environ.get("JUST_TESTRUN", "0") != "1":
os.environ["NVCC_FLAGS"] = nvcc_flags
if use_conda_cuda:
os.environ["CONDA_CUDA_TOOLKIT_BUILD_CONSTRAINT1"] = "- cuda-toolkit"
os.environ["CONDA_CUDA_TOOLKIT_BUILD_CONSTRAINT2"] = (
f"- cuda-version={major}.{minor}"
)
return ["-c", f"nvidia/label/cuda-{major}.{minor}.0"]
else:
os.environ["CUDA_HOME"] = f"/usr/local/cuda-{major}.{minor}/"
return []
def setup_conda_pytorch_constraint() -> List[str]:
@@ -95,7 +104,7 @@ def setup_conda_pytorch_constraint() -> List[str]:
return ["-c", "pytorch", "-c", "nvidia"]
def setup_conda_cudatoolkit_constraint():
def setup_conda_cudatoolkit_constraint() -> None:
if CU_VERSION == "cpu":
os.environ["CONDA_CPUONLY_FEATURE"] = "- cpuonly"
os.environ["CONDA_CUDATOOLKIT_CONSTRAINT"] = ""
@@ -116,14 +125,14 @@ def setup_conda_cudatoolkit_constraint():
os.environ["CONDA_CUDATOOLKIT_CONSTRAINT"] = toolkit
def do_build(start_args: List[str]):
def do_build(start_args: List[str]) -> None:
args = start_args.copy()
test_flag = os.environ.get("TEST_FLAG")
if test_flag is not None:
args.append(test_flag)
args.extend(["-c", "bottler", "-c", "fvcore", "-c", "iopath", "-c", "conda-forge"])
args.extend(["-c", "bottler", "-c", "iopath", "-c", "conda-forge"])
args.append("--no-anaconda-upload")
args.extend(["--python", os.environ["PYTHON_VERSION"]])
args.append("packaging/pytorch3d")
@@ -132,8 +141,16 @@ def do_build(start_args: List[str]):
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Build the conda package.")
parser.add_argument(
"--use-conda-cuda",
action="store_true",
help="get cuda from conda ignoring local cuda",
)
our_args = parser.parse_args()
args = ["conda", "build"]
setup_cuda()
args += setup_cuda(use_conda_cuda=our_args.use_conda_cuda)
init_path = source_root_dir + "/pytorch3d/__init__.py"
build_version = runpy.run_path(init_path)["__version__"]

View File

@@ -26,6 +26,6 @@ version_str="".join([
torch.version.cuda.replace(".",""),
f"_pyt{pyt_version_str}"
])
!pip install fvcore iopath
!pip install iopath
!pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
```

View File

@@ -144,7 +144,7 @@ do
conda activate "$tag"
# shellcheck disable=SC2086
conda install -y -c pytorch $extra_channel "pytorch=$pytorch_version" "$cudatools=$CUDA_TAG"
pip install fvcore iopath
pip install iopath
echo "python version" "$python_version" "pytorch version" "$pytorch_version" "cuda version" "$cu_version" "tag" "$tag"
rm -rf dist

View File

@@ -8,10 +8,13 @@ source:
requirements:
build:
- {{ compiler('c') }} # [win]
{{ environ.get('CONDA_CUDA_TOOLKIT_BUILD_CONSTRAINT1', '') }}
{{ environ.get('CONDA_CUDA_TOOLKIT_BUILD_CONSTRAINT2', '') }}
{{ environ.get('CONDA_CUB_CONSTRAINT') }}
host:
- python
- mkl =2023 # [x86_64]
{{ environ.get('SETUPTOOLS_CONSTRAINT') }}
{{ environ.get('CONDA_PYTORCH_BUILD_CONSTRAINT') }}
{{ environ.get('CONDA_PYTORCH_MKL_CONSTRAINT') }}
@@ -22,13 +25,14 @@ requirements:
- python
- numpy >=1.11
- torchvision >=0.5
- fvcore
- mkl =2023 # [x86_64]
- iopath
{{ environ.get('CONDA_PYTORCH_CONSTRAINT') }}
{{ environ.get('CONDA_CUDATOOLKIT_CONSTRAINT') }}
build:
string: py{{py}}_{{ environ['CU_VERSION'] }}_pyt{{ environ['PYTORCH_VERSION_NODOT']}}
# script: LD_LIBRARY_PATH=$PREFIX/lib:$BUILD_PREFIX/lib:$LD_LIBRARY_PATH python setup.py install --single-version-externally-managed --record=record.txt # [not win]
script: python setup.py install --single-version-externally-managed --record=record.txt # [not win]
script_env:
- CUDA_HOME
@@ -48,6 +52,10 @@ test:
- imageio
- hydra-core
- accelerate
- matplotlib
- tabulate
- pandas
- sqlalchemy
commands:
#pytest .
python -m unittest discover -v -s tests -t .

View File

@@ -99,7 +99,7 @@ except ModuleNotFoundError:
no_accelerate = os.environ.get("PYTORCH3D_NO_ACCELERATE") is not None
class Experiment(Configurable): # pyre-ignore: 13
class Experiment(Configurable):
"""
This class is at the top level of Implicitron's config hierarchy. Its
members are high-level components necessary for training an implicit rende-
@@ -120,12 +120,16 @@ class Experiment(Configurable): # pyre-ignore: 13
will be saved here.
"""
# pyre-fixme[13]: Attribute `data_source` is never initialized.
data_source: DataSourceBase
data_source_class_type: str = "ImplicitronDataSource"
# pyre-fixme[13]: Attribute `model_factory` is never initialized.
model_factory: ModelFactoryBase
model_factory_class_type: str = "ImplicitronModelFactory"
# pyre-fixme[13]: Attribute `optimizer_factory` is never initialized.
optimizer_factory: OptimizerFactoryBase
optimizer_factory_class_type: str = "ImplicitronOptimizerFactory"
# pyre-fixme[13]: Attribute `training_loop` is never initialized.
training_loop: TrainingLoopBase
training_loop_class_type: str = "ImplicitronTrainingLoop"

View File

@@ -45,7 +45,7 @@ class ModelFactoryBase(ReplaceableBase):
@registry.register
class ImplicitronModelFactory(ModelFactoryBase): # pyre-ignore [13]
class ImplicitronModelFactory(ModelFactoryBase):
"""
A factory class that initializes an implicit rendering model.
@@ -61,6 +61,7 @@ class ImplicitronModelFactory(ModelFactoryBase): # pyre-ignore [13]
"""
# pyre-fixme[13]: Attribute `model` is never initialized.
model: ImplicitronModelBase
model_class_type: str = "GenericModel"
resume: bool = True

View File

@@ -30,13 +30,13 @@ from .utils import seed_all_random_engines
logger = logging.getLogger(__name__)
# pyre-fixme[13]: Attribute `evaluator` is never initialized.
class TrainingLoopBase(ReplaceableBase):
"""
Members:
evaluator: An EvaluatorBase instance, used to evaluate training results.
"""
# pyre-fixme[13]: Attribute `evaluator` is never initialized.
evaluator: Optional[EvaluatorBase]
evaluator_class_type: Optional[str] = "ImplicitronEvaluator"

View File

@@ -6,4 +6,4 @@
# pyre-unsafe
__version__ = "0.7.7"
__version__ = "0.7.8"

View File

@@ -99,6 +99,7 @@ PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("marching_cubes", &MarchingCubes);
// Pulsar.
// Pulsar not enabled on AMD.
#ifdef PULSAR_LOGGING_ENABLED
c10::ShowLogInfoToStderr();
#endif

View File

@@ -36,11 +36,13 @@
#pragma nv_diag_suppress 2951
#pragma nv_diag_suppress 2967
#else
#if !defined(USE_ROCM)
#pragma diag_suppress = attribute_not_allowed
#pragma diag_suppress = 1866
#pragma diag_suppress = 2941
#pragma diag_suppress = 2951
#pragma diag_suppress = 2967
#endif //! USE_ROCM
#endif
#else // __CUDACC__
#define INLINE inline
@@ -56,7 +58,9 @@
#pragma clang diagnostic pop
#ifdef WITH_CUDA
#include <ATen/cuda/CUDAContext.h>
#if !defined(USE_ROCM)
#include <vector_functions.h>
#endif //! USE_ROCM
#else
#ifndef cudaStream_t
typedef void* cudaStream_t;

View File

@@ -59,6 +59,11 @@ getLastCudaError(const char* errorMessage, const char* file, const int line) {
#define SHARED __shared__
#define ACTIVEMASK() __activemask()
#define BALLOT(mask, val) __ballot_sync((mask), val)
/* TODO (ROCM-6.2): None of the WARP_* are used anywhere and ROCM-6.2 natively
* supports __shfl_*. Disabling until the move to ROCM-6.2.
*/
#if !defined(USE_ROCM)
/**
* Find the cumulative sum within a warp up to the current
* thread lane, with each mask thread contributing base.
@@ -115,6 +120,7 @@ INLINE DEVICE float3 WARP_SUM_FLOAT3(
ret.z = WARP_SUM(group, mask, base.z);
return ret;
}
#endif //! USE_ROCM
// Floating point.
// #define FMUL(a, b) __fmul_rn((a), (b))
@@ -142,6 +148,7 @@ INLINE DEVICE float3 WARP_SUM_FLOAT3(
#define FMA(x, y, z) __fmaf_rn((x), (y), (z))
#define I2F(a) __int2float_rn(a)
#define FRCP(x) __frcp_rn(x)
#if !defined(USE_ROCM)
__device__ static float atomicMax(float* address, float val) {
int* address_as_i = (int*)address;
int old = *address_as_i, assumed;
@@ -166,6 +173,7 @@ __device__ static float atomicMin(float* address, float val) {
} while (assumed != old);
return __int_as_float(old);
}
#endif //! USE_ROCM
#define DMAX(a, b) FMAX(a, b)
#define DMIN(a, b) FMIN(a, b)
#define DSQRT(a) sqrt(a)

View File

@@ -14,7 +14,7 @@
#include "./commands.h"
namespace pulsar {
IHD CamGradInfo::CamGradInfo() {
IHD CamGradInfo::CamGradInfo(int x) {
cam_pos = make_float3(0.f, 0.f, 0.f);
pixel_0_0_center = make_float3(0.f, 0.f, 0.f);
pixel_dir_x = make_float3(0.f, 0.f, 0.f);

View File

@@ -63,7 +63,7 @@ inline bool operator==(const CamInfo& a, const CamInfo& b) {
};
struct CamGradInfo {
HOST DEVICE CamGradInfo();
HOST DEVICE CamGradInfo(int = 0);
float3 cam_pos;
float3 pixel_0_0_center;
float3 pixel_dir_x;

View File

@@ -24,7 +24,7 @@
// #pragma diag_suppress = 68
#include <ATen/cuda/CUDAContext.h>
// #pragma pop
#include "../cuda/commands.h"
#include "../gpu/commands.h"
#else
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Weverything"

View File

@@ -46,6 +46,7 @@ IHD float3 outer_product_sum(const float3& a) {
}
// TODO: put intrinsics here.
#if !defined(USE_ROCM)
IHD float3 operator+(const float3& a, const float3& b) {
return make_float3(a.x + b.x, a.y + b.y, a.z + b.z);
}
@@ -93,6 +94,7 @@ IHD float3 operator*(const float3& a, const float3& b) {
IHD float3 operator*(const float& a, const float3& b) {
return b * a;
}
#endif //! USE_ROCM
INLINE DEVICE float length(const float3& v) {
// TODO: benchmark what's faster.

View File

@@ -283,9 +283,15 @@ GLOBAL void render(
(percent_allowed_difference > 0.f &&
max_closest_possible_intersection > depth_threshold) ||
tracker.get_n_hits() >= max_n_hits;
#if defined(__CUDACC__) && defined(__HIP_PLATFORM_AMD__)
unsigned long long warp_done = __ballot(done);
int warp_done_bit_cnt = __popcll(warp_done);
#else
uint warp_done = thread_warp.ballot(done);
int warp_done_bit_cnt = POPC(warp_done);
#endif //__CUDACC__ && __HIP_PLATFORM_AMD__
if (thread_warp.thread_rank() == 0)
ATOMICADD_B(&n_pixels_done, POPC(warp_done));
ATOMICADD_B(&n_pixels_done, warp_done_bit_cnt);
// This sync is necessary to keep n_loaded until all threads are done with
// painting.
thread_block.sync();

View File

@@ -213,8 +213,8 @@ std::tuple<size_t, size_t, bool, torch::Tensor> Renderer::arg_check(
const float& gamma,
const float& max_depth,
float& min_depth,
const c10::optional<torch::Tensor>& bg_col,
const c10::optional<torch::Tensor>& opacity,
const std::optional<torch::Tensor>& bg_col,
const std::optional<torch::Tensor>& opacity,
const float& percent_allowed_difference,
const uint& max_n_hits,
const uint& mode) {
@@ -668,8 +668,8 @@ std::tuple<torch::Tensor, torch::Tensor> Renderer::forward(
const float& gamma,
const float& max_depth,
float min_depth,
const c10::optional<torch::Tensor>& bg_col,
const c10::optional<torch::Tensor>& opacity,
const std::optional<torch::Tensor>& bg_col,
const std::optional<torch::Tensor>& opacity,
const float& percent_allowed_difference,
const uint& max_n_hits,
const uint& mode) {
@@ -888,14 +888,14 @@ std::tuple<torch::Tensor, torch::Tensor> Renderer::forward(
};
std::tuple<
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>>
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>>
Renderer::backward(
const torch::Tensor& grad_im,
const torch::Tensor& image,
@@ -912,8 +912,8 @@ Renderer::backward(
const float& gamma,
const float& max_depth,
float min_depth,
const c10::optional<torch::Tensor>& bg_col,
const c10::optional<torch::Tensor>& opacity,
const std::optional<torch::Tensor>& bg_col,
const std::optional<torch::Tensor>& opacity,
const float& percent_allowed_difference,
const uint& max_n_hits,
const uint& mode,
@@ -922,7 +922,7 @@ Renderer::backward(
const bool& dif_rad,
const bool& dif_cam,
const bool& dif_opy,
const at::optional<std::pair<uint, uint>>& dbg_pos) {
const std::optional<std::pair<uint, uint>>& dbg_pos) {
this->ensure_on_device(this->device_tracker.device());
size_t batch_size;
size_t n_points;
@@ -1045,14 +1045,14 @@ Renderer::backward(
}
// Prepare the return value.
std::tuple<
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>>
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>>
ret;
if (mode == 1 || (!dif_pos && !dif_col && !dif_rad && !dif_cam && !dif_opy)) {
return ret;

View File

@@ -44,21 +44,21 @@ struct Renderer {
const float& gamma,
const float& max_depth,
float min_depth,
const c10::optional<torch::Tensor>& bg_col,
const c10::optional<torch::Tensor>& opacity,
const std::optional<torch::Tensor>& bg_col,
const std::optional<torch::Tensor>& opacity,
const float& percent_allowed_difference,
const uint& max_n_hits,
const uint& mode);
std::tuple<
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>,
at::optional<torch::Tensor>>
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>,
std::optional<torch::Tensor>>
backward(
const torch::Tensor& grad_im,
const torch::Tensor& image,
@@ -75,8 +75,8 @@ struct Renderer {
const float& gamma,
const float& max_depth,
float min_depth,
const c10::optional<torch::Tensor>& bg_col,
const c10::optional<torch::Tensor>& opacity,
const std::optional<torch::Tensor>& bg_col,
const std::optional<torch::Tensor>& opacity,
const float& percent_allowed_difference,
const uint& max_n_hits,
const uint& mode,
@@ -85,7 +85,7 @@ struct Renderer {
const bool& dif_rad,
const bool& dif_cam,
const bool& dif_opy,
const at::optional<std::pair<uint, uint>>& dbg_pos);
const std::optional<std::pair<uint, uint>>& dbg_pos);
// Infrastructure.
/**
@@ -115,8 +115,8 @@ struct Renderer {
const float& gamma,
const float& max_depth,
float& min_depth,
const c10::optional<torch::Tensor>& bg_col,
const c10::optional<torch::Tensor>& opacity,
const std::optional<torch::Tensor>& bg_col,
const std::optional<torch::Tensor>& opacity,
const float& percent_allowed_difference,
const uint& max_n_hits,
const uint& mode);

View File

@@ -144,7 +144,7 @@ __device__ void CheckPixelInsideFace(
const bool zero_face_area =
(face_area <= kEpsilon && face_area >= -1.0f * kEpsilon);
if (zmax < 0 || cull_backfaces && back_face || outside_bbox ||
if (zmax < 0 || (cull_backfaces && back_face) || outside_bbox ||
zero_face_area) {
return;
}

View File

@@ -18,6 +18,8 @@ const auto vEpsilon = 1e-8;
// Common functions and operators for float2.
// Complex arithmetic is already defined for AMD.
#if !defined(USE_ROCM)
__device__ inline float2 operator-(const float2& a, const float2& b) {
return make_float2(a.x - b.x, a.y - b.y);
}
@@ -41,6 +43,7 @@ __device__ inline float2 operator*(const float2& a, const float2& b) {
__device__ inline float2 operator*(const float a, const float2& b) {
return make_float2(a * b.x, a * b.y);
}
#endif
__device__ inline float FloatMin3(const float a, const float b, const float c) {
return fminf(a, fminf(b, c));

View File

@@ -23,37 +23,51 @@ WarpReduceMin(scalar_t* min_dists, int64_t* min_idxs, const size_t tid) {
min_idxs[tid] = min_idxs[tid + 32];
min_dists[tid] = min_dists[tid + 32];
}
// AMD does not use explicit syncwarp and instead automatically inserts memory
// fences during compilation.
#if !defined(USE_ROCM)
__syncwarp();
#endif
// s = 16
if (min_dists[tid] > min_dists[tid + 16]) {
min_idxs[tid] = min_idxs[tid + 16];
min_dists[tid] = min_dists[tid + 16];
}
#if !defined(USE_ROCM)
__syncwarp();
#endif
// s = 8
if (min_dists[tid] > min_dists[tid + 8]) {
min_idxs[tid] = min_idxs[tid + 8];
min_dists[tid] = min_dists[tid + 8];
}
#if !defined(USE_ROCM)
__syncwarp();
#endif
// s = 4
if (min_dists[tid] > min_dists[tid + 4]) {
min_idxs[tid] = min_idxs[tid + 4];
min_dists[tid] = min_dists[tid + 4];
}
#if !defined(USE_ROCM)
__syncwarp();
#endif
// s = 2
if (min_dists[tid] > min_dists[tid + 2]) {
min_idxs[tid] = min_idxs[tid + 2];
min_dists[tid] = min_dists[tid + 2];
}
#if !defined(USE_ROCM)
__syncwarp();
#endif
// s = 1
if (min_dists[tid] > min_dists[tid + 1]) {
min_idxs[tid] = min_idxs[tid + 1];
min_dists[tid] = min_dists[tid + 1];
}
#if !defined(USE_ROCM)
__syncwarp();
#endif
}
template <typename scalar_t>
@@ -65,30 +79,42 @@ __device__ void WarpReduceMax(
dists[tid] = dists[tid + 32];
dists_idx[tid] = dists_idx[tid + 32];
}
#if !defined(USE_ROCM)
__syncwarp();
#endif
if (dists[tid] < dists[tid + 16]) {
dists[tid] = dists[tid + 16];
dists_idx[tid] = dists_idx[tid + 16];
}
#if !defined(USE_ROCM)
__syncwarp();
#endif
if (dists[tid] < dists[tid + 8]) {
dists[tid] = dists[tid + 8];
dists_idx[tid] = dists_idx[tid + 8];
}
#if !defined(USE_ROCM)
__syncwarp();
#endif
if (dists[tid] < dists[tid + 4]) {
dists[tid] = dists[tid + 4];
dists_idx[tid] = dists_idx[tid + 4];
}
#if !defined(USE_ROCM)
__syncwarp();
#endif
if (dists[tid] < dists[tid + 2]) {
dists[tid] = dists[tid + 2];
dists_idx[tid] = dists_idx[tid + 2];
}
#if !defined(USE_ROCM)
__syncwarp();
#endif
if (dists[tid] < dists[tid + 1]) {
dists[tid] = dists[tid + 1];
dists_idx[tid] = dists_idx[tid + 1];
}
#if !defined(USE_ROCM)
__syncwarp();
#endif
}

View File

@@ -41,7 +41,7 @@ class DataSourceBase(ReplaceableBase):
@registry.register
class ImplicitronDataSource(DataSourceBase): # pyre-ignore[13]
class ImplicitronDataSource(DataSourceBase):
"""
Represents the data used in Implicitron. This is the only implementation
of DataSourceBase provided.
@@ -52,8 +52,11 @@ class ImplicitronDataSource(DataSourceBase): # pyre-ignore[13]
data_loader_map_provider_class_type: identifies type for data_loader_map_provider.
"""
# pyre-fixme[13]: Attribute `dataset_map_provider` is never initialized.
dataset_map_provider: DatasetMapProviderBase
# pyre-fixme[13]: Attribute `dataset_map_provider_class_type` is never initialized.
dataset_map_provider_class_type: str
# pyre-fixme[13]: Attribute `data_loader_map_provider` is never initialized.
data_loader_map_provider: DataLoaderMapProviderBase
data_loader_map_provider_class_type: str = "SequenceDataLoaderMapProvider"

View File

@@ -276,6 +276,7 @@ class FrameData(Mapping[str, Any]):
image_size_hw=tuple(self.effective_image_size_hw), # pyre-ignore
)
crop_bbox_xywh = bbox_xyxy_to_xywh(clamp_bbox_xyxy)
self.crop_bbox_xywh = crop_bbox_xywh
if self.fg_probability is not None:
self.fg_probability = crop_around_box(

View File

@@ -66,7 +66,7 @@ _NEED_CONTROL: Tuple[str, ...] = (
@registry.register
class JsonIndexDatasetMapProvider(DatasetMapProviderBase): # pyre-ignore [13]
class JsonIndexDatasetMapProvider(DatasetMapProviderBase):
"""
Generates the training / validation and testing dataset objects for
a dataset laid out on disk like Co3D, with annotations in json files.
@@ -95,6 +95,7 @@ class JsonIndexDatasetMapProvider(DatasetMapProviderBase): # pyre-ignore [13]
path_manager_factory_class_type: The class type of `path_manager_factory`.
"""
# pyre-fixme[13]: Attribute `category` is never initialized.
category: str
task_str: str = "singlesequence"
dataset_root: str = _CO3D_DATASET_ROOT
@@ -104,8 +105,10 @@ class JsonIndexDatasetMapProvider(DatasetMapProviderBase): # pyre-ignore [13]
test_restrict_sequence_id: int = -1
assert_single_seq: bool = False
only_test_set: bool = False
# pyre-fixme[13]: Attribute `dataset` is never initialized.
dataset: JsonIndexDataset
dataset_class_type: str = "JsonIndexDataset"
# pyre-fixme[13]: Attribute `path_manager_factory` is never initialized.
path_manager_factory: PathManagerFactory
path_manager_factory_class_type: str = "PathManagerFactory"

View File

@@ -56,7 +56,7 @@ logger = logging.getLogger(__name__)
@registry.register
class JsonIndexDatasetMapProviderV2(DatasetMapProviderBase): # pyre-ignore [13]
class JsonIndexDatasetMapProviderV2(DatasetMapProviderBase):
"""
Generates the training, validation, and testing dataset objects for
a dataset laid out on disk like CO3Dv2, with annotations in gzipped json files.
@@ -171,7 +171,9 @@ class JsonIndexDatasetMapProviderV2(DatasetMapProviderBase): # pyre-ignore [13]
path_manager_factory_class_type: The class type of `path_manager_factory`.
"""
# pyre-fixme[13]: Attribute `category` is never initialized.
category: str
# pyre-fixme[13]: Attribute `subset_name` is never initialized.
subset_name: str
dataset_root: str = _CO3DV2_DATASET_ROOT
@@ -183,8 +185,10 @@ class JsonIndexDatasetMapProviderV2(DatasetMapProviderBase): # pyre-ignore [13]
n_known_frames_for_test: int = 0
dataset_class_type: str = "JsonIndexDataset"
# pyre-fixme[13]: Attribute `dataset` is never initialized.
dataset: JsonIndexDataset
# pyre-fixme[13]: Attribute `path_manager_factory` is never initialized.
path_manager_factory: PathManagerFactory
path_manager_factory_class_type: str = "PathManagerFactory"

View File

@@ -32,7 +32,7 @@ from .utils import DATASET_TYPE_KNOWN
@registry.register
class RenderedMeshDatasetMapProvider(DatasetMapProviderBase): # pyre-ignore [13]
class RenderedMeshDatasetMapProvider(DatasetMapProviderBase):
"""
A simple single-scene dataset based on PyTorch3D renders of a mesh.
Provides `num_views` renders of the mesh as train, with no val
@@ -76,6 +76,7 @@ class RenderedMeshDatasetMapProvider(DatasetMapProviderBase): # pyre-ignore [13
resolution: int = 128
use_point_light: bool = True
gpu_idx: Optional[int] = 0
# pyre-fixme[13]: Attribute `path_manager_factory` is never initialized.
path_manager_factory: PathManagerFactory
path_manager_factory_class_type: str = "PathManagerFactory"

View File

@@ -83,7 +83,6 @@ class SingleSceneDataset(DatasetBase, Configurable):
return self.eval_batches
# pyre-fixme[13]: Uninitialized attribute
class SingleSceneDatasetMapProviderBase(DatasetMapProviderBase):
"""
Base for provider of data for one scene from LLFF or blender datasets.
@@ -100,8 +99,11 @@ class SingleSceneDatasetMapProviderBase(DatasetMapProviderBase):
testing frame.
"""
# pyre-fixme[13]: Attribute `base_dir` is never initialized.
base_dir: str
# pyre-fixme[13]: Attribute `object_name` is never initialized.
object_name: str
# pyre-fixme[13]: Attribute `path_manager_factory` is never initialized.
path_manager_factory: PathManagerFactory
path_manager_factory_class_type: str = "PathManagerFactory"
n_known_frames_for_test: Optional[int] = None

View File

@@ -348,6 +348,7 @@ def adjust_camera_to_image_scale_(
camera: PerspectiveCameras,
original_size_wh: torch.Tensor,
new_size_wh: torch.LongTensor,
# pyre-fixme[7]: Expected `PerspectiveCameras` but got implicit return value of `None`.
) -> PerspectiveCameras:
focal_length_px, principal_point_px = _convert_ndc_to_pixels(
camera.focal_length[0],
@@ -367,7 +368,7 @@ def adjust_camera_to_image_scale_(
image_size_wh_output,
)
camera.focal_length = focal_length_scaled[None]
camera.principal_point = principal_point_scaled[None] # pyre-ignore
camera.principal_point = principal_point_scaled[None]
# NOTE this cache is per-worker; they are implemented as processes.

View File

@@ -65,7 +65,7 @@ logger = logging.getLogger(__name__)
@registry.register
class GenericModel(ImplicitronModelBase): # pyre-ignore: 13
class GenericModel(ImplicitronModelBase):
"""
GenericModel is a wrapper for the neural implicit
rendering and reconstruction pipeline which consists
@@ -226,34 +226,42 @@ class GenericModel(ImplicitronModelBase): # pyre-ignore: 13
# ---- global encoder settings
global_encoder_class_type: Optional[str] = None
# pyre-fixme[13]: Attribute `global_encoder` is never initialized.
global_encoder: Optional[GlobalEncoderBase]
# ---- raysampler
raysampler_class_type: str = "AdaptiveRaySampler"
# pyre-fixme[13]: Attribute `raysampler` is never initialized.
raysampler: RaySamplerBase
# ---- renderer configs
renderer_class_type: str = "MultiPassEmissionAbsorptionRenderer"
# pyre-fixme[13]: Attribute `renderer` is never initialized.
renderer: BaseRenderer
# ---- image feature extractor settings
# (This is only created if view_pooler is enabled)
# pyre-fixme[13]: Attribute `image_feature_extractor` is never initialized.
image_feature_extractor: Optional[FeatureExtractorBase]
image_feature_extractor_class_type: Optional[str] = None
# ---- view pooler settings
view_pooler_enabled: bool = False
# pyre-fixme[13]: Attribute `view_pooler` is never initialized.
view_pooler: Optional[ViewPooler]
# ---- implicit function settings
implicit_function_class_type: str = "NeuralRadianceFieldImplicitFunction"
# This is just a model, never constructed.
# The actual implicit functions live in self._implicit_functions
# pyre-fixme[13]: Attribute `implicit_function` is never initialized.
implicit_function: ImplicitFunctionBase
# ----- metrics
# pyre-fixme[13]: Attribute `view_metrics` is never initialized.
view_metrics: ViewMetricsBase
view_metrics_class_type: str = "ViewMetrics"
# pyre-fixme[13]: Attribute `regularization_metrics` is never initialized.
regularization_metrics: RegularizationMetricsBase
regularization_metrics_class_type: str = "RegularizationMetrics"

View File

@@ -59,12 +59,13 @@ class GlobalEncoderBase(ReplaceableBase):
# TODO: probabilistic embeddings?
@registry.register
class SequenceAutodecoder(GlobalEncoderBase, torch.nn.Module): # pyre-ignore: 13
class SequenceAutodecoder(GlobalEncoderBase, torch.nn.Module):
"""
A global encoder implementation which provides an autodecoder encoding
of the frame's sequence identifier.
"""
# pyre-fixme[13]: Attribute `autodecoder` is never initialized.
autodecoder: Autodecoder
def __post_init__(self):

View File

@@ -244,7 +244,6 @@ class MLPWithInputSkips(Configurable, torch.nn.Module):
@registry.register
# pyre-fixme[13]: Attribute `network` is never initialized.
class MLPDecoder(DecoderFunctionBase):
"""
Decoding function which uses `MLPWithIputSkips` to convert the embedding to output.
@@ -272,6 +271,7 @@ class MLPDecoder(DecoderFunctionBase):
input_dim: int = 3
param_groups: Dict[str, str] = field(default_factory=lambda: {})
# pyre-fixme[13]: Attribute `network` is never initialized.
network: MLPWithInputSkips
def __post_init__(self):

View File

@@ -318,10 +318,11 @@ class SRNRaymarchHyperNet(Configurable, torch.nn.Module):
@registry.register
# pyre-fixme[13]: Uninitialized attribute
class SRNImplicitFunction(ImplicitFunctionBase, torch.nn.Module):
latent_dim: int = 0
# pyre-fixme[13]: Attribute `raymarch_function` is never initialized.
raymarch_function: SRNRaymarchFunction
# pyre-fixme[13]: Attribute `pixel_generator` is never initialized.
pixel_generator: SRNPixelGenerator
def __post_init__(self):
@@ -366,7 +367,6 @@ class SRNImplicitFunction(ImplicitFunctionBase, torch.nn.Module):
@registry.register
# pyre-fixme[13]: Uninitialized attribute
class SRNHyperNetImplicitFunction(ImplicitFunctionBase, torch.nn.Module):
"""
This implicit function uses a hypernetwork to generate the
@@ -377,7 +377,9 @@ class SRNHyperNetImplicitFunction(ImplicitFunctionBase, torch.nn.Module):
latent_dim_hypernet: int = 0
latent_dim: int = 0
# pyre-fixme[13]: Attribute `hypernet` is never initialized.
hypernet: SRNRaymarchHyperNet
# pyre-fixme[13]: Attribute `pixel_generator` is never initialized.
pixel_generator: SRNPixelGenerator
def __post_init__(self):

View File

@@ -805,7 +805,6 @@ class VMFactorizedVoxelGrid(VoxelGridBase):
)
# pyre-fixme[13]: Attribute `voxel_grid` is never initialized.
class VoxelGridModule(Configurable, torch.nn.Module):
"""
A wrapper torch.nn.Module for the VoxelGrid classes, which
@@ -845,6 +844,7 @@ class VoxelGridModule(Configurable, torch.nn.Module):
"""
voxel_grid_class_type: str = "FullResolutionVoxelGrid"
# pyre-fixme[13]: Attribute `voxel_grid` is never initialized.
voxel_grid: VoxelGridBase
extents: Tuple[float, float, float] = (2.0, 2.0, 2.0)

View File

@@ -39,7 +39,6 @@ enable_get_default_args(HarmonicEmbedding)
@registry.register
# pyre-ignore[13]
class VoxelGridImplicitFunction(ImplicitFunctionBase, torch.nn.Module):
"""
This implicit function consists of two streams, one for the density calculation and one
@@ -145,9 +144,11 @@ class VoxelGridImplicitFunction(ImplicitFunctionBase, torch.nn.Module):
"""
# ---- voxel grid for density
# pyre-fixme[13]: Attribute `voxel_grid_density` is never initialized.
voxel_grid_density: VoxelGridModule
# ---- voxel grid for color
# pyre-fixme[13]: Attribute `voxel_grid_color` is never initialized.
voxel_grid_color: VoxelGridModule
# ---- harmonic embeddings density
@@ -163,10 +164,12 @@ class VoxelGridImplicitFunction(ImplicitFunctionBase, torch.nn.Module):
# ---- decoder function for density
decoder_density_class_type: str = "MLPDecoder"
# pyre-fixme[13]: Attribute `decoder_density` is never initialized.
decoder_density: DecoderFunctionBase
# ---- decoder function for color
decoder_color_class_type: str = "MLPDecoder"
# pyre-fixme[13]: Attribute `decoder_color` is never initialized.
decoder_color: DecoderFunctionBase
# ---- cuda streams

View File

@@ -69,7 +69,7 @@ IMPLICIT_FUNCTION_ARGS_TO_REMOVE: List[str] = [
@registry.register
class OverfitModel(ImplicitronModelBase): # pyre-ignore: 13
class OverfitModel(ImplicitronModelBase):
"""
OverfitModel is a wrapper for the neural implicit
rendering and reconstruction pipeline which consists
@@ -198,27 +198,34 @@ class OverfitModel(ImplicitronModelBase): # pyre-ignore: 13
# ---- global encoder settings
global_encoder_class_type: Optional[str] = None
# pyre-fixme[13]: Attribute `global_encoder` is never initialized.
global_encoder: Optional[GlobalEncoderBase]
# ---- raysampler
raysampler_class_type: str = "AdaptiveRaySampler"
# pyre-fixme[13]: Attribute `raysampler` is never initialized.
raysampler: RaySamplerBase
# ---- renderer configs
renderer_class_type: str = "MultiPassEmissionAbsorptionRenderer"
# pyre-fixme[13]: Attribute `renderer` is never initialized.
renderer: BaseRenderer
# ---- implicit function settings
share_implicit_function_across_passes: bool = False
implicit_function_class_type: str = "NeuralRadianceFieldImplicitFunction"
# pyre-fixme[13]: Attribute `implicit_function` is never initialized.
implicit_function: ImplicitFunctionBase
coarse_implicit_function_class_type: Optional[str] = None
# pyre-fixme[13]: Attribute `coarse_implicit_function` is never initialized.
coarse_implicit_function: Optional[ImplicitFunctionBase]
# ----- metrics
# pyre-fixme[13]: Attribute `view_metrics` is never initialized.
view_metrics: ViewMetricsBase
view_metrics_class_type: str = "ViewMetrics"
# pyre-fixme[13]: Attribute `regularization_metrics` is never initialized.
regularization_metrics: RegularizationMetricsBase
regularization_metrics_class_type: str = "RegularizationMetrics"

View File

@@ -18,9 +18,7 @@ from .raymarcher import RaymarcherBase
@registry.register
class MultiPassEmissionAbsorptionRenderer( # pyre-ignore: 13
BaseRenderer, torch.nn.Module
):
class MultiPassEmissionAbsorptionRenderer(BaseRenderer, torch.nn.Module):
"""
Implements the multi-pass rendering function, in particular,
with emission-absorption ray marching used in NeRF [1]. First, it evaluates
@@ -86,6 +84,7 @@ class MultiPassEmissionAbsorptionRenderer( # pyre-ignore: 13
"""
raymarcher_class_type: str = "EmissionAbsorptionRaymarcher"
# pyre-fixme[13]: Attribute `raymarcher` is never initialized.
raymarcher: RaymarcherBase
n_pts_per_ray_fine_training: int = 64

View File

@@ -16,8 +16,6 @@ from pytorch3d.renderer.implicit.sample_pdf import sample_pdf
@expand_args_fields
# pyre-fixme[13]: Attribute `n_pts_per_ray` is never initialized.
# pyre-fixme[13]: Attribute `random_sampling` is never initialized.
class RayPointRefiner(Configurable, torch.nn.Module):
"""
Implements the importance sampling of points along rays.
@@ -45,7 +43,9 @@ class RayPointRefiner(Configurable, torch.nn.Module):
for Anti-Aliasing Neural Radiance Fields." ICCV 2021.
"""
# pyre-fixme[13]: Attribute `n_pts_per_ray` is never initialized.
n_pts_per_ray: int
# pyre-fixme[13]: Attribute `random_sampling` is never initialized.
random_sampling: bool
add_input_samples: bool = True
blurpool_weights: bool = False

View File

@@ -24,9 +24,10 @@ from .rgb_net import RayNormalColoringNetwork
@registry.register
class SignedDistanceFunctionRenderer(BaseRenderer, torch.nn.Module): # pyre-ignore[13]
class SignedDistanceFunctionRenderer(BaseRenderer, torch.nn.Module):
render_features_dimensions: int = 3
object_bounding_sphere: float = 1.0
# pyre-fixme[13]: Attribute `ray_tracer` is never initialized.
ray_tracer: RayTracing
ray_normal_coloring_network_args: DictConfig = get_default_args_field(
RayNormalColoringNetwork


@@ -121,7 +121,6 @@ def weighted_sum_losses(
return None
loss = sum(losses_weighted)
assert torch.is_tensor(loss)
# pyre-fixme[7]: Expected `Optional[Tensor]` but got `int`.
return loss
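
The pyre-fixme dropped above flagged that Python's built-in sum() returns the int 0 for an empty sequence, which is presumably why the empty case returns None before reaching the assert:

import torch

losses = [torch.tensor(0.5), torch.tensor(1.5)]
print(sum(losses))   # tensor(2.), a Tensor, so the assert passes
print(sum([]))       # 0, a plain int, which is why the empty case returns None earlier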


@@ -16,7 +16,6 @@ from .feature_aggregator import FeatureAggregatorBase
from .view_sampler import ViewSampler
# pyre-ignore: 13
class ViewPooler(Configurable, torch.nn.Module):
"""
Implements sampling of image-based features at the 2d projections of a set
@@ -35,8 +34,10 @@ class ViewPooler(Configurable, torch.nn.Module):
from a set of source images. FeatureAggregator executes step (4) above.
"""
# pyre-fixme[13]: Attribute `view_sampler` is never initialized.
view_sampler: ViewSampler
feature_aggregator_class_type: str = "AngleWeightedReductionFeatureAggregator"
# pyre-fixme[13]: Attribute `feature_aggregator` is never initialized.
feature_aggregator: FeatureAggregatorBase
def __post_init__(self):
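
A rough sketch of the sampling step the ViewPooler docstring describes, projecting points into each source view and bilinearly sampling its feature map, with toy tensors and a plain average standing in for the configurable FeatureAggregator:

import torch
import torch.nn.functional as F

feats = torch.rand(2, 16, 32, 32)            # (n_source_views, C, H, W) feature maps
proj_xy = torch.rand(2, 100, 2) * 2 - 1      # 2D projections of 100 points, in [-1, 1]

sampled = F.grid_sample(
    feats, proj_xy[:, :, None, :], align_corners=False
)                                             # (n_source_views, C, n_points, 1)
pooled = sampled.squeeze(-1).mean(dim=0)      # stand-in "aggregator": average over views
print(pooled.shape)                           # torch.Size([16, 100])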


@@ -156,7 +156,6 @@ def render_point_cloud_pytorch3d(
cumprod = torch.cat((torch.ones_like(cumprod[..., :1]), cumprod[..., :-1]), dim=-1)
depths = (weights * cumprod * fragments.zbuf).sum(dim=-1)
# add the rendering mask
# pyre-fixme[6]: For 1st param expected `Tensor` but got `float`.
render_mask = -torch.prod(1.0 - weights, dim=-1) + 1.0
# cat depths and render mask
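
A quick check with made-up blending weights that the render mask above equals 1 - prod(1 - w): it is 0 where no point covers the pixel and tends to 1 as coverage grows.

import torch

weights = torch.tensor([[0.0, 0.0], [0.5, 0.5], [1.0, 0.2]])   # (pixels, K) blending weights
render_mask = -torch.prod(1.0 - weights, dim=-1) + 1.0
print(render_mask)   # tensor([0.0000, 0.7500, 1.0000])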


@@ -163,6 +163,9 @@ def _read_chunks(
if binary_data is not None:
binary_data = np.frombuffer(binary_data, dtype=np.uint8)
# pyre-fixme[7]: Expected `Optional[Tuple[Dict[str, typing.Any],
# ndarray[typing.Any, typing.Any]]]` but got `Tuple[typing.Any,
# Optional[ndarray[typing.Any, dtype[typing.Any]]]]`.
return json_data, binary_data


@@ -409,6 +409,7 @@ def _parse_mtl(
texture_files = {}
material_name = ""
# pyre-fixme[9]: f has type `str`; used as `IO[typing.Any]`.
with _open_file(f, path_manager, "r") as f:
for line in f:
tokens = line.strip().split()


@@ -649,8 +649,7 @@ def _load_obj(
# Create an array of strings of material names for each face.
# If faces_materials_idx == -1 then that face doesn't have a material.
idx = faces_materials_idx.cpu().numpy()
face_material_names = np.array(material_names)[idx] # (F,)
face_material_names[idx == -1] = ""
face_material_names = np.array([""] + material_names)[idx + 1] # (F,)
# Construct the atlas.
texture_atlas = make_mesh_texture_atlas(
@@ -756,10 +755,13 @@ def save_obj(
output_path = Path(f)
# Save the .obj file
# pyre-fixme[9]: f has type `Union[Path, str]`; used as `IO[typing.Any]`.
with _open_file(f, path_manager, "w") as f:
if save_texture:
# Add the header required for the texture info to be loaded correctly
obj_header = "\nmtllib {0}.mtl\nusemtl mesh\n\n".format(output_path.stem)
# pyre-fixme[16]: Item `Path` of `Union[Path, str]` has no attribute
# `write`.
f.write(obj_header)
_save(
f,
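
The new indexing handles the sentinel -1 (face has no material) by prepending an empty name and shifting all indices by one, which also works when the material list is empty. A small numpy illustration with hypothetical material names:

import numpy as np

material_names = ["skin", "cloth"]                    # hypothetical materials
idx = np.array([0, -1, 1, -1])                        # -1 means the face has no material

face_material_names = np.array([""] + material_names)[idx + 1]
print(face_material_names)                            # ['skin' '' 'cloth' '']

# The old two-step form indexed with -1 first (wrapping to the last material) and
# failed when material_names was empty; the shifted lookup handles both cases:
print(np.array([""] + [])[np.array([-1, -1]) + 1])    # ['' '']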


@@ -1250,6 +1250,9 @@ def _save_ply(
if verts_normals is not None:
verts_dtype.append(("normals", np.float32, 3))
if verts_colors is not None:
# pyre-fixme[6]: For 1st argument expected `Tuple[str,
# Type[floating[_32Bit]], int]` but got `Tuple[str,
# Type[Union[floating[_32Bit], unsignedinteger[typing.Any]]], int]`.
verts_dtype.append(("colors", color_np_type, 3))
vert_data = np.zeros(verts.shape[0], dtype=verts_dtype)


@@ -27,8 +27,10 @@ def _validate_chamfer_reduction_inputs(
"""
if batch_reduction is not None and batch_reduction not in ["mean", "sum"]:
raise ValueError('batch_reduction must be one of ["mean", "sum"] or None')
if point_reduction is not None and point_reduction not in ["mean", "sum"]:
raise ValueError('point_reduction must be one of ["mean", "sum"] or None')
if point_reduction is not None and point_reduction not in ["mean", "sum", "max"]:
raise ValueError(
'point_reduction must be one of ["mean", "sum", "max"] or None'
)
if point_reduction is None and batch_reduction is not None:
raise ValueError("Batch reduction must be None if point_reduction is None")
@@ -80,7 +82,6 @@ def _chamfer_distance_single_direction(
x_normals,
y_normals,
weights,
batch_reduction: Union[str, None],
point_reduction: Union[str, None],
norm: int,
abs_cosine: bool,
@@ -103,11 +104,6 @@ def _chamfer_distance_single_direction(
raise ValueError("weights cannot be negative.")
if weights.sum() == 0.0:
weights = weights.view(N, 1)
if batch_reduction in ["mean", "sum"]:
return (
(x.sum((1, 2)) * weights).sum() * 0.0,
(x.sum((1, 2)) * weights).sum() * 0.0,
)
return ((x.sum((1, 2)) * weights) * 0.0, (x.sum((1, 2)) * weights) * 0.0)
cham_norm_x = x.new_zeros(())
@@ -135,7 +131,10 @@ def _chamfer_distance_single_direction(
if weights is not None:
cham_norm_x *= weights.view(N, 1)
if point_reduction is not None:
if point_reduction == "max":
assert not return_normals
cham_x = cham_x.max(1).values # (N,)
elif point_reduction is not None:
# Apply point reduction
cham_x = cham_x.sum(1) # (N,)
if return_normals:
@@ -146,22 +145,34 @@ def _chamfer_distance_single_direction(
if return_normals:
cham_norm_x /= x_lengths_clamped
if batch_reduction is not None:
# batch_reduction == "sum"
cham_x = cham_x.sum()
if return_normals:
cham_norm_x = cham_norm_x.sum()
if batch_reduction == "mean":
div = weights.sum() if weights is not None else max(N, 1)
cham_x /= div
if return_normals:
cham_norm_x /= div
cham_dist = cham_x
cham_normals = cham_norm_x if return_normals else None
return cham_dist, cham_normals
def _apply_batch_reduction(
cham_x, cham_norm_x, weights, batch_reduction: Union[str, None]
):
if batch_reduction is None:
return (cham_x, cham_norm_x)
# batch_reduction == "sum"
N = cham_x.shape[0]
cham_x = cham_x.sum()
if cham_norm_x is not None:
cham_norm_x = cham_norm_x.sum()
if batch_reduction == "mean":
if weights is None:
div = max(N, 1)
elif weights.sum() == 0.0:
div = 1
else:
div = weights.sum()
cham_x /= div
if cham_norm_x is not None:
cham_norm_x /= div
return (cham_x, cham_norm_x)
def chamfer_distance(
x,
y,
@@ -197,7 +208,8 @@ def chamfer_distance(
batch_reduction: Reduction operation to apply for the loss across the
batch, can be one of ["mean", "sum"] or None.
point_reduction: Reduction operation to apply for the loss across the
points, can be one of ["mean", "sum"] or None.
points, can be one of ["mean", "sum", "max"] or None. Using "max" leads to the
Hausdorff distance.
norm: int indicates the norm used for the distance. Supports 1 for L1 and 2 for L2.
single_directional: If False (default), loss comes from both the distance between
each point in x and its nearest neighbor in y and each point in y and its nearest
@@ -227,6 +239,10 @@ def chamfer_distance(
if not ((norm == 1) or (norm == 2)):
raise ValueError("Support for 1 or 2 norm.")
if point_reduction == "max" and (x_normals is not None or y_normals is not None):
raise ValueError('Normals must be None if point_reduction is "max"')
x, x_lengths, x_normals = _handle_pointcloud_input(x, x_lengths, x_normals)
y, y_lengths, y_normals = _handle_pointcloud_input(y, y_lengths, y_normals)
@@ -238,13 +254,13 @@ def chamfer_distance(
x_normals,
y_normals,
weights,
batch_reduction,
point_reduction,
norm,
abs_cosine,
)
if single_directional:
return cham_x, cham_norm_x
loss = cham_x
loss_normals = cham_norm_x
else:
cham_y, cham_norm_y = _chamfer_distance_single_direction(
y,
@@ -254,17 +270,23 @@ def chamfer_distance(
y_normals,
x_normals,
weights,
batch_reduction,
point_reduction,
norm,
abs_cosine,
)
if point_reduction is not None:
return (
cham_x + cham_y,
(cham_norm_x + cham_norm_y) if cham_norm_x is not None else None,
)
return (
(cham_x, cham_y),
(cham_norm_x, cham_norm_y) if cham_norm_x is not None else None,
)
if point_reduction == "max":
loss = torch.maximum(cham_x, cham_y)
loss_normals = None
elif point_reduction is not None:
loss = cham_x + cham_y
if cham_norm_x is not None:
loss_normals = cham_norm_x + cham_norm_y
else:
loss_normals = None
else:
loss = (cham_x, cham_y)
if cham_norm_x is not None:
loss_normals = (cham_norm_x, cham_norm_y)
else:
loss_normals = None
return _apply_batch_reduction(loss, loss_normals, weights, batch_reduction)
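
The new point_reduction="max" option can be sanity-checked against a naive directed-Hausdorff computation on squared L2 distances (matching the default norm=2). A minimal sketch, assuming a pytorch3d build that includes this change:

import torch
from pytorch3d.loss import chamfer_distance

x = torch.rand(2, 100, 3)                       # (N, P1, D) point clouds
y = torch.rand(2, 80, 3)                        # (N, P2, D)

loss, _ = chamfer_distance(x, y, point_reduction="max", batch_reduction=None)

d2 = torch.cdist(x, y) ** 2                     # squared L2, as chamfer uses for norm=2
h_xy = d2.min(dim=2).values.max(dim=1).values   # farthest x from its nearest y
h_yx = d2.min(dim=1).values.max(dim=1).values   # farthest y from its nearest x
print(torch.allclose(loss, torch.maximum(h_xy, h_yx), atol=1e-5))   # expected: True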


@@ -617,6 +617,7 @@ def _splat_points_to_volumes(
w = wX * wY * wZ
# valid - binary indicators of votes that fall into the volume
# pyre-fixme[16]: `int` has no attribute `long`.
valid = (
(0 <= X_)
* (X_ < grid_sizes_xyz[:, None, 0:1])
@@ -635,14 +636,19 @@ def _splat_points_to_volumes(
idx_valid = idx * valid + rand_idx * (1 - valid)
w_valid = w * valid.type_as(w)
if mask is not None:
# pyre-fixme[6]: For 1st argument expected `Tensor` but got `int`.
w_valid = w_valid * mask.type_as(w)[:, :, None]
# scatter add casts the votes into the weight accumulator
# and the feature accumulator
# pyre-fixme[6]: For 3rd argument expected `Tensor` but got
# `Union[int, Tensor]`.
volume_densities.scatter_add_(1, idx_valid, w_valid)
# reshape idx_valid -> (minibatch, feature_dim, n_points)
idx_valid = idx_valid.view(ba, 1, n_points).expand_as(points_features)
# pyre-fixme[16]: Item `int` of `Union[int, Tensor]` has no
# attribute `view`.
w_valid = w_valid.view(ba, 1, n_points)
# volume_features of shape (minibatch, feature_dim, n_voxels)
@@ -724,6 +730,7 @@ def _round_points_to_volumes(
# valid - binary indicators of votes that fall into the volume
# pyre-fixme[9]: grid_sizes has type `LongTensor`; used as `Tensor`.
grid_sizes = grid_sizes.type_as(XYZ)
# pyre-fixme[16]: `int` has no attribute `long`.
valid = (
(0 <= X)
* (X < grid_sizes_xyz[:, None, 0:1])
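
A one-dimensional analogue of the splatting pattern above: votes that fall outside the volume keep a legal index (zero here, a random in-range voxel in the real code) but carry zero weight, so scatter_add_ accumulates only the valid ones.

import torch

n_voxels, n_points = 4, 5
idx = torch.tensor([[0, 1, 1, 3, 7]])            # last vote lands outside the grid
w = torch.ones(1, n_points)

valid = (idx < n_voxels).long()                   # binary indicator of in-volume votes
idx_valid = idx * valid                           # invalid votes redirected to voxel 0 here
w_valid = w * valid.type_as(w)                    # ...but they carry zero weight

volume_densities = torch.zeros(1, n_voxels)
volume_densities.scatter_add_(1, idx_valid, w_valid)
print(volume_densities)                           # tensor([[1., 2., 0., 1.]])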


@@ -143,8 +143,6 @@ def convert_pointclouds_to_tensor(pcl: Union[torch.Tensor, "Pointclouds"]):
elif torch.is_tensor(pcl):
X = pcl
num_points = X.shape[1] * torch.ones( # type: ignore
# pyre-fixme[16]: Item `Pointclouds` of `Union[Pointclouds, Tensor]` has
# no attribute `shape`.
X.shape[0],
device=X.device,
dtype=torch.int64,


@@ -6,6 +6,8 @@
# pyre-unsafe
import torch
from .blending import (
BlendParams,
hard_rgb_blend,


@@ -212,15 +212,15 @@ def softmax_rgb_blend(
# Reshape to be compatible with (N, H, W, K) values in fragments
if torch.is_tensor(zfar):
# pyre-fixme[16]
zfar = zfar[:, None, None, None]
if torch.is_tensor(znear):
# pyre-fixme[16]: Item `float` of `Union[float, Tensor]` has no attribute
# `__getitem__`.
znear = znear[:, None, None, None]
# pyre-fixme[6]: Expected `float` but got `Union[float, Tensor]`
z_inv = (zfar - fragments.zbuf) / (zfar - znear) * mask
# pyre-fixme[6]: Expected `Tensor` but got `float`
z_inv_max = torch.max(z_inv, dim=-1).values[..., None].clamp(min=eps)
# pyre-fixme[6]: Expected `Tensor` but got `float`
weights_num = prob_map * torch.exp((z_inv - z_inv_max) / blend_params.gamma)
# Also apply exp normalize trick for the background color weight.
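
The exp normalize trick mentioned above, shown in isolation: subtracting the per-pixel maximum of z_inv before exponentiating keeps the softmax-style weights finite even for small gamma.

import torch

z_inv = torch.tensor([50.0, 200.0, 1000.0])
gamma = 1e-2

naive = torch.exp(z_inv / gamma)                        # overflows to inf
stable = torch.exp((z_inv - z_inv.max()) / gamma)       # largest term becomes exp(0)
print(naive)                    # tensor([inf, inf, inf])
print(stable / stable.sum())    # tensor([0., 0., 1.]), finite softmax-style weights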


@@ -1782,8 +1782,6 @@ def get_ndc_to_screen_transform(
K = torch.zeros((cameras._N, 4, 4), device=cameras.device, dtype=torch.float32)
if not torch.is_tensor(image_size):
image_size = torch.tensor(image_size, device=cameras.device)
# pyre-fixme[16]: Item `List` of `Union[List[typing.Any], Tensor, Tuple[Any,
# ...]]` has no attribute `view`.
image_size = image_size.view(-1, 2) # of shape (1 or B)x2
height, width = image_size.unbind(1)


@@ -497,6 +497,7 @@ def clip_faces(
faces_case3 = face_verts_unclipped[case3_unclipped_idx]
# index (0, 1, or 2) of the vertex in front of the clipping plane
# pyre-fixme[61]: `faces_clipped_verts` is undefined, or not always defined.
p1_face_ind = torch.where(~faces_clipped_verts[case3_unclipped_idx])[1]
# Solve for the points p4, p5 that intersect the clipping plane
@@ -540,6 +541,7 @@ def clip_faces(
faces_case4 = face_verts_unclipped[case4_unclipped_idx]
# index (0, 1, or 2) of the vertex behind the clipping plane
# pyre-fixme[61]: `faces_clipped_verts` is undefined, or not always defined.
p1_face_ind = torch.where(faces_clipped_verts[case4_unclipped_idx])[1]
# Solve for the points p4, p5 that intersect the clipping plane


@@ -6,7 +6,10 @@
# pyre-unsafe
import torch
from .compositor import AlphaCompositor, NormWeightedCompositor
from .pulsar.unified import PulsarPointsRenderer
from .rasterize_points import rasterize_points


@@ -270,6 +270,8 @@ class TensorProperties(nn.Module):
# to have the same shape as the input tensor.
new_dims = len(tensor_dims) - len(idx_dims)
new_shape = idx_dims + (1,) * new_dims
# pyre-fixme[58]: `+` is not supported for operand types
# `Tuple[int]` and `torch._C.Size`
expand_dims = (-1,) + tensor_dims[1:]
_batch_idx = _batch_idx.view(*new_shape)
_batch_idx = _batch_idx.expand(*expand_dims)


@@ -97,7 +97,10 @@ def _sqrt_positive_part(x: torch.Tensor) -> torch.Tensor:
"""
ret = torch.zeros_like(x)
positive_mask = x > 0
ret[positive_mask] = torch.sqrt(x[positive_mask])
if torch.is_grad_enabled():
ret[positive_mask] = torch.sqrt(x[positive_mask])
else:
ret = torch.where(positive_mask, torch.sqrt(x), ret)
return ret
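
The reason for the torch.is_grad_enabled() branch above: torch.where still evaluates sqrt on the masked-out entries, so with gradients enabled the NaN from sqrt of a negative input leaks into the backward pass, whereas masked assignment does not. A small demonstration:

import torch

x = torch.tensor([4.0, -1.0], requires_grad=True)
safe = torch.zeros_like(x)
safe[x > 0] = torch.sqrt(x[x > 0])
safe.sum().backward()
print(x.grad)                   # tensor([0.2500, 0.0000]), finite everywhere

x2 = torch.tensor([4.0, -1.0], requires_grad=True)
unsafe = torch.where(x2 > 0, torch.sqrt(x2), torch.zeros_like(x2))
unsafe.sum().backward()
print(x2.grad)                  # second entry is nan, hence the masked path when grads are on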


@@ -369,6 +369,7 @@ def plot_scene(
# update camera viewpoint if provided
if viewpoints_eye_at_up_world is not None:
# Use camera params for batch index or the first camera if only one provided.
# pyre-fixme[61]: `n_viewpoint_cameras` is undefined, or not always defined.
viewpoint_idx = min(n_viewpoint_cameras - 1, subplot_idx)
eye, at, up = (i[viewpoint_idx] for i in viewpoints_eye_at_up_world)
@@ -627,7 +628,7 @@ def _add_struct_from_batch(
def _add_mesh_trace(
fig: go.Figure, # pyre-ignore[11]
fig: go.Figure,
meshes: Meshes,
trace_name: str,
subplot_idx: int,
@@ -988,7 +989,7 @@ def _gen_fig_with_subplots(
def _update_axes_bounds(
verts_center: torch.Tensor,
max_expand: float,
current_layout: go.Scene, # pyre-ignore[11]
current_layout: go.Scene,
) -> None: # pragma: no cover
"""
Takes in the vertices' center point and max spread, and the current plotly figure


@@ -59,6 +59,8 @@ def texturesuv_image_matplotlib(
for i in indices:
# setting clip_on=False makes it obvious when
# we have UV coordinates outside the correct range
# pyre-fixme[6]: For 1st argument expected `Tuple[float, float]` but got
# `ndarray[Any, Any]`.
ax.add_patch(Circle(centers[i], radius, color=color, clip_on=False))


@@ -153,7 +153,7 @@ setup(
)
+ [trainer],
package_dir={trainer: "projects/implicitron_trainer"},
install_requires=["fvcore", "iopath"],
install_requires=["iopath"],
extras_require={
"all": ["matplotlib", "tqdm>4.29.0", "imageio", "ipywidgets"],
"dev": ["flake8", "usort"],


@@ -847,6 +847,85 @@ class TestChamfer(TestCaseMixin, unittest.TestCase):
loss, loss_norm, pred_loss[0], pred_loss_norm[0], p1, p11, p2, p22
)
def test_chamfer_point_reduction_max(self):
"""
Compare output of vectorized chamfer loss with naive implementation
for point_reduction = "max" and batch_reduction = None.
"""
N, P1, P2 = 7, 10, 18
device = get_random_cuda_device()
points_normals = TestChamfer.init_pointclouds(N, P1, P2, device)
p1 = points_normals.p1
p2 = points_normals.p2
weights = points_normals.weights
p11 = p1.detach().clone()
p22 = p2.detach().clone()
p11.requires_grad = True
p22.requires_grad = True
pred_loss, unused_pred_loss_norm = TestChamfer.chamfer_distance_naive(
p1, p2, x_normals=None, y_normals=None
)
loss, loss_norm = chamfer_distance(
p11,
p22,
x_normals=None,
y_normals=None,
weights=weights,
batch_reduction=None,
point_reduction="max",
)
pred_loss_max = torch.maximum(
pred_loss[0].max(1).values, pred_loss[1].max(1).values
)
pred_loss_max *= weights
self.assertClose(loss, pred_loss_max)
self.assertIsNone(loss_norm)
# Check gradients
self._check_gradients(loss, loss_norm, pred_loss_max, None, p1, p11, p2, p22)
def test_single_directional_chamfer_point_reduction_max(self):
"""
Compare output of vectorized single directional chamfer loss with naive implementation
for point_reduction = "max" and batch_reduction = None.
"""
N, P1, P2 = 7, 10, 18
device = get_random_cuda_device()
points_normals = TestChamfer.init_pointclouds(N, P1, P2, device)
p1 = points_normals.p1
p2 = points_normals.p2
weights = points_normals.weights
p11 = p1.detach().clone()
p22 = p2.detach().clone()
p11.requires_grad = True
p22.requires_grad = True
pred_loss, unused_pred_loss_norm = TestChamfer.chamfer_distance_naive(
p1, p2, x_normals=None, y_normals=None
)
loss, loss_norm = chamfer_distance(
p11,
p22,
x_normals=None,
y_normals=None,
weights=weights,
batch_reduction=None,
point_reduction="max",
single_directional=True,
)
pred_loss_max = pred_loss[0].max(1).values
pred_loss_max *= weights
self.assertClose(loss, pred_loss_max)
self.assertIsNone(loss_norm)
# Check gradients
self._check_gradients(loss, loss_norm, pred_loss_max, None, p1, p11, p2, p22)
def _check_gradients(
self,
loss,
@@ -1020,9 +1099,9 @@ class TestChamfer(TestCaseMixin, unittest.TestCase):
with self.assertRaisesRegex(ValueError, "batch_reduction must be one of"):
chamfer_distance(p1, p2, weights=weights, batch_reduction="max")
# Error when point_reduction is not in ["mean", "sum"] or None.
# Error when point_reduction is not in ["mean", "sum", "max"] or None.
with self.assertRaisesRegex(ValueError, "point_reduction must be one of"):
chamfer_distance(p1, p2, weights=weights, point_reduction="max")
chamfer_distance(p1, p2, weights=weights, point_reduction="min")
def test_incorrect_weights(self):
N, P1, P2 = 16, 64, 128