314 Commits

Author SHA1 Message Date
Christoph Lassner
e7c1f026ea [pulsar] Removing LOGGER.debug statements for performance gain.
We identified that these logging statements can deteriorate performance in certain cases. I propose removing them from the regular renderer implementation and letting individuals re-insert debug logging wherever needed on a case-by-case basis.
2022-07-25 09:08:58 -07:00
Krzysztof Chalupka
cb49550486 Add MeshRasterizerOpenGL
Summary:
Adding MeshRasterizerOpenGL, a faster alternative to MeshRasterizer. The new rasterizer follows the ideas from "Differentiable Surface Rendering via non-Differentiable Sampling".

The new rasterizer is 20x faster on a 2M-face mesh (try pose optimization on Nefertiti from https://www.cs.cmu.edu/~kmcrane/Projects/ModelRepository/!). The larger the mesh, the larger the speedup.

There are two main disadvantages:
* The new rasterizer works with an OpenGL backend, so it requires pycuda.gl and pyopengl to be installed (though we avoided writing any C++ code, everything is in Python!)
* The new rasterizer is non-differentiable. However, you can still differentiate the rendering function if you use it with the new SplatterPhongShader which we recently added to PyTorch3D (see the original paper cited above).

Reviewed By: patricklabatut, jcjohnson

Differential Revision: D37698816

fbshipit-source-id: 54d120639d3cb001f096237807e54aced0acda25
2022-07-22 15:52:50 -07:00
Krzysztof Chalupka
36edf2b302 Add .to methods to the splatter and SplatterPhongShader.
Summary: Needed to properly change devices during OpenGL rasterization.

Reviewed By: jcjohnson

Differential Revision: D37698568

fbshipit-source-id: 38968149d577322e662d3b5d04880204b0a7be29
2022-07-22 14:36:22 -07:00
Krzysztof Chalupka
78bb6d17fa Add EGLContext and DeviceContextManager
Summary:
EGLContext is a utility to render with OpenGL without an attached display (that is, without a monitor).

DeviceContextManager allows us to avoid unnecessary context creations and releases. See docstrings for more info.
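The caching idea behind DeviceContextManager can be sketched in plain Python (illustrative only — the class name, factory argument, and method are assumptions here, and the real class manages EGL contexts, not strings):

```python
import threading

class DeviceContextManager:
    """Cache one rendering context per device to avoid unnecessary
    context creations and releases (hypothetical sketch; the real
    class wraps EGL contexts)."""

    def __init__(self, context_factory):
        self._context_factory = context_factory
        self._contexts = {}
        self._lock = threading.Lock()

    def get_context(self, device_id: int):
        # Create the context lazily, at most once per device.
        with self._lock:
            if device_id not in self._contexts:
                self._contexts[device_id] = self._context_factory(device_id)
            return self._contexts[device_id]

creations = []
mgr = DeviceContextManager(lambda dev: creations.append(dev) or f"ctx-{dev}")
a = mgr.get_context(0)
b = mgr.get_context(0)  # served from the cache; no second creation
```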

Reviewed By: jcjohnson

Differential Revision: D36562551

fbshipit-source-id: eb0d2a2f85555ee110e203d435a44ad243281d2c
2022-07-22 09:43:05 -07:00
Jeremy Reizenstein
54c75b4114 GM error for unbatched inputs
Summary: Error when sending an unbatched FrameData through GM.

Reviewed By: shapovalov

Differential Revision: D38036286

fbshipit-source-id: b8d280c61fbbefdc112c57ccd630ab3ccce7b44e
2022-07-21 15:10:24 -07:00
Jeremy Reizenstein
3783437d2f lazy all_train_cameras
Summary: Avoid calculating all_train_cameras before it is needed, because it is slow in some datasets.

Reviewed By: shapovalov

Differential Revision: D38037157

fbshipit-source-id: 95461226655cde2626b680661951ab17ebb0ec75
2022-07-21 15:04:00 -07:00
Jeremy Reizenstein
b2dc520210 lints
Summary: lint issues (mostly flake) in implicitron

Reviewed By: patricklabatut

Differential Revision: D37920948

fbshipit-source-id: 8cb3c2a2838d111c80a211c98a404c210d4649ed
2022-07-21 13:33:49 -07:00
Jeremy Reizenstein
8597d4c5c1 dependencies for testing
Summary: We especially need omegaconf when testing implicitron.

Reviewed By: patricklabatut

Differential Revision: D37921440

fbshipit-source-id: 4e66fde35aa29f60eabd92bf459cd584cfd7e5ca
2022-07-21 13:22:19 -07:00
Jeremy Reizenstein
38fd8380f7 fix ndc/screen problem in blender/llff (#39)
Summary:
X-link: https://github.com/fairinternal/pytorch3d/pull/39

Blender and LLFF cameras were sending screen space focal length and principal point to a camera init function expecting NDC
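The bug was a unit mismatch: intrinsics measured in pixels were passed where NDC values were expected. A minimal sketch of the conversion, assuming the common PyTorch3D-style convention where the shorter image side spans [-1, 1] (verify signs and scaling against your camera class):

```python
def screen_to_ndc_intrinsics(focal, px, py, image_width, image_height):
    """Convert screen-space focal length and principal point (pixels)
    to NDC (sketch; assumes the shorter side maps to [-1, 1])."""
    s = min(image_width, image_height)
    focal_ndc = focal * 2.0 / s
    # Screen x/y grow right/down from the image corner; NDC x/y grow
    # left/up from the image center.
    px_ndc = -(px - image_width / 2.0) * 2.0 / s
    py_ndc = -(py - image_height / 2.0) * 2.0 / s
    return focal_ndc, px_ndc, py_ndc

# A principal point at the image center maps to (0, 0) in NDC.
f, cx, cy = screen_to_ndc_intrinsics(400.0, 320.0, 240.0, 640, 480)
```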

Reviewed By: shapovalov

Differential Revision: D37788686

fbshipit-source-id: 2ddf7436248bc0d174eceb04c288b93858138582
2022-07-19 10:38:13 -07:00
Jeremy Reizenstein
67840f8320 multiseq conditioning type
Summary: Add the conditioning types to the repro yaml files. In particular, this fixes test_conditioning_type.

Reviewed By: shapovalov

Differential Revision: D37914537

fbshipit-source-id: 621390f329d9da662d915eb3b7bc709206a20552
2022-07-18 03:11:40 -07:00
Jeremy Reizenstein
9b2e570536 option to avoid accelerate
Summary: For debugging, introduce PYTORCH3D_NO_ACCELERATE env var.

Reviewed By: shapovalov

Differential Revision: D37885393

fbshipit-source-id: de080080c0aa4b6d874028937083a0113bb97c23
2022-07-17 13:15:59 -07:00
Iurii Makarov
0f966217e5 Fixed typing to have compatibility with OmegaConf 2.2.2 in Pytorch3D
Summary:
I tried to run `experiment.py` and `pytorch3d_implicitron_runner` and faced the failure with this traceback: https://www.internalfb.com/phabricator/paste/view/P515734086

It seems to be due to the new release of OmegaConf (version=2.2.2) which requires different typing. This fix helped to overcome it.

Reviewed By: bottler

Differential Revision: D37881644

fbshipit-source-id: be0cd4ced0526f8382cea5bdca9b340e93a2fba2
2022-07-15 05:55:03 -07:00
Jiali Duan
379c8b2780 Fix Pytorch3D PnP test
Summary:
EPnP fails the test when the number of points is below 6. As suggested, the quadratic option can in theory deal with as few as 4 points (so num_pts_thresh=3 is set). And when num_pts > num_pts_thresh=4, skip_q is False.

To avoid bumping num_pts_thresh while passing all the original tests, check_output is set to False when num_pts < 6, similar to the logic in Line 123-127. This makes sure that the algorithm doesn't crash.

Reviewed By: shapovalov

Differential Revision: D37804438

fbshipit-source-id: 74576d63a9553e25e3ec344677edb6912b5f9354
2022-07-14 09:50:39 -07:00
Jeremy Reizenstein
8e0c82b89a lint fix: raise from None
Summary: New linter warning is complaining about `raise` inside `except`.
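The pattern the linter asks for looks like this (generic sketch; the function and message are illustrative, not from the PyTorch3D codebase):

```python
def parse_port(value: str) -> int:
    try:
        return int(value)
    except ValueError:
        # `raise ... from None` suppresses the implicit exception chaining
        # ("During handling of the above exception, another exception
        # occurred") that the linter flags for a bare raise inside except.
        raise RuntimeError(f"invalid port: {value!r}") from None

try:
    parse_port("abc")
except RuntimeError as e:
    cause_suppressed = e.__cause__ is None and e.__suppress_context__
```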

Reviewed By: kjchalup

Differential Revision: D37819264

fbshipit-source-id: 56ad5d0558ea39e1125f3c76b43b7376aea2bc7c
2022-07-14 04:21:44 -07:00
David Novotny
8ba9a694ee Remove -1 from crop mask
Summary: Removing 1 from the crop mask does not seem sensible.

Reviewed By: bottler, shapovalov

Differential Revision: D37843680

fbshipit-source-id: 70cec80f9ea26deac63312da62b9c8af27d2a010
2022-07-14 03:30:51 -07:00
Roman Shapovalov
36ba079bef Fixes to CO3Dv2 provider.
Summary:
1. Random sampling of num batches without replacement is not supported.
2. Providers should implement the interface for the training loop to work.

Reviewed By: bottler, davnov134

Differential Revision: D37815388

fbshipit-source-id: 8a2795b524e733f07346ffdb20a9c0eb1a2b8190
2022-07-13 09:45:29 -07:00
Jeremy Reizenstein
b95ec190af followups to D37592429
Summary: Fixing comments on D37592429 (0dce883241)

Reviewed By: shapovalov

Differential Revision: D37752367

fbshipit-source-id: 40aa7ee4dc0c5b8b7a84a09d13a3933a9e3afedd
2022-07-13 06:07:02 -07:00
Jeremy Reizenstein
55f67b0d18 add accelerate dependency
Summary: Accelerate is an additional implicitron dependency, so document it.

Reviewed By: shapovalov

Differential Revision: D37786933

fbshipit-source-id: 11024fe604107881f8ca29e17cb5cbfe492fc7f9
2022-07-13 06:00:05 -07:00
Roman Shapovalov
4261e59f51 Fix: making visualisation work again
Summary:
1. Respecting `visdom_show_preds` parameter when it is False.
2. Clipping the images pre-visualisation, which is important for methods like SRN that are not aware of the pixel value range.
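The clipping step amounts to clamping predictions into the displayable range before they reach visdom. A minimal sketch (pure Python on a flat pixel list; the real code operates on tensors):

```python
def clip_image(pixels, lo=0.0, hi=1.0):
    """Clamp predicted pixel values into the displayable range before
    visualisation (sketch; methods like SRN can predict values
    outside [0, 1])."""
    return [min(max(p, lo), hi) for p in pixels]

clipped = clip_image([-0.2, 0.5, 1.3])
```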

Reviewed By: bottler

Differential Revision: D37786439

fbshipit-source-id: 8dbb5104290bcc5c2829716b663cae17edc911bd
2022-07-13 05:29:09 -07:00
David Novotny
af55ba01f8 Fix for box_crop=True
Summary: one more bugfix in JsonIndexDataset

Reviewed By: bottler

Differential Revision: D37789138

fbshipit-source-id: 2fb2bda7448674091ff6b279175f0bbd16ff7a62
2022-07-12 10:03:58 -07:00
Jeremy Reizenstein
d3b7f5f421 fix trainer test
Summary: After recent accelerate change D37543870 (aa8b03f31d), update interactive trainer test.

Reviewed By: shapovalov

Differential Revision: D37785932

fbshipit-source-id: 9211374323b6cfd80f6c5ff3a4fc1c0ca04b54ba
2022-07-12 07:20:21 -07:00
Tristan Rice
4ecc9ea89d shader: fix HardDepthShader sizes + tests (#1252)
Summary:
This fixes an indexing bug in HardDepthShader and adds proper unit tests for both of the DepthShaders. This bug was introduced when updating the shader sizes and discovered when I switched my local model onto pytorch3d trunk instead of the patched copy.

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1252

Test Plan:
Unit test + custom model code

```
pytest tests/test_shader.py
```

![image](https://user-images.githubusercontent.com/909104/178397456-f478d0e0-9f6c-467a-a85b-adb4c47adfee.png)

Reviewed By: bottler

Differential Revision: D37775767

Pulled By: d4l3k

fbshipit-source-id: 5f001903985976d7067d1fa0a3102d602790e3e8
2022-07-12 04:38:33 -07:00
Tristan Rice
8d10ba52b2 renderer: add support for rendering high dimensional textures for classification/segmentation use cases (#1248)
Summary:
For 3D segmentation problems it's really useful to be able to train the models from multiple viewpoints using PyTorch3D as the renderer. Currently, due to hardcoded assumptions in a few spots, the mesh renderer only supports rendering RGB (3-dimensional) data. You can encode the classification information as 3-channel data, but if you have more than 3 classes you're out of luck.

This relaxes the assumptions to make rendering semantic classes work with `HardFlatShader` and `AmbientLights` with no diffusion/specular. The other shaders/lights don't make any sense for classification since they mutate the texture values in some way.

This only requires changes in `Materials` and `AmbientLights`. The bulk of the code is the unit test.
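With AmbientLights and no diffuse/specular term, shading reduces to an element-wise product between the texture and the ambient color, which is why the channel count K is free. A pure-Python sketch of that per-channel product (illustrative; the real code works on texel tensors):

```python
def ambient_shade(texels, ambient_color):
    """Per-channel ambient shading for K-channel 'texels' (sketch).
    Without diffuse/specular terms the output is an element-wise
    product, so K can be any number of semantic classes."""
    return [[t * a for t, a in zip(texel, ambient_color)] for texel in texels]

# Two pixels with a 5-channel one-hot class encoding, white ambient light:
out = ambient_shade([[0, 1, 0, 0, 0], [0, 0, 0, 0, 1]], [1.0] * 5)
```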

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1248

Test Plan: Added unit test that renders a 5 dimensional texture and compare dimensions 2-5 to a stored picture.

Reviewed By: bottler

Differential Revision: D37764610

Pulled By: d4l3k

fbshipit-source-id: 031895724d9318a6f6bab5b31055bb3f438176a5
2022-07-11 21:22:45 -07:00
Nikhila Ravi
aa8b03f31d Updates to support Accelerate and multigpu training (#37)
Summary:
## Changes:
- Added Accelerate Library and refactored experiment.py to use it
- Needed to move `init_optimizer` and `ExperimentConfig` to a separate file to be compatible with submitit/hydra
- Needed to make some modifications to data loaders etc to work well with the accelerate ddp wrappers
- Loading/saving checkpoints incorporates an unwrapping step to remove the DDP-wrapped model
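The unwrapping step peels the user's model out of the DDP wrapper before checkpointing (Accelerate provides `accelerator.unwrap_model` for this). A minimal sketch of the idea with a stand-in wrapper class:

```python
class _DDPWrapper:
    """Stand-in for a DistributedDataParallel wrapper (the real one
    stores the user's model in its `.module` attribute)."""
    def __init__(self, module):
        self.module = module

def unwrap_model(model):
    # Peel off any `.module` layers so the checkpoint holds the plain
    # model, loadable without DDP.
    while hasattr(model, "module"):
        model = model.module
    return model

inner = object()
wrapped = _DDPWrapper(inner)
```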

## Tests

Tested with both `torchrun` and `submitit/hydra` on two gpus locally. Here are the commands:

**Torchrun**

Modules loaded:
```sh
1) anaconda3/2021.05   2) cuda/11.3   3) NCCL/2.9.8-3-cuda.11.3   4) gcc/5.2.0 (but unload gcc when using submitit)
```

```sh
torchrun --nnodes=1 --nproc_per_node=2 experiment.py --config-path ./configs --config-name repro_singleseq_nerf_test
```

**Submitit/Hydra Local test**

```sh
~/pytorch3d/projects/implicitron_trainer$ HYDRA_FULL_ERROR=1 python3.9 experiment.py --config-name repro_singleseq_nerf_test --multirun --config-path ./configs  hydra/launcher=submitit_local hydra.launcher.gpus_per_node=2 hydra.launcher.tasks_per_node=2 hydra.launcher.nodes=1
```

**Submitit/Hydra distributed test**

```sh
~/implicitron/pytorch3d$ python3.9 experiment.py --config-name repro_singleseq_nerf_test --multirun --config-path ./configs  hydra/launcher=submitit_slurm hydra.launcher.gpus_per_node=8 hydra.launcher.tasks_per_node=8 hydra.launcher.nodes=1 hydra.launcher.partition=learnlab hydra.launcher.timeout_min=4320
```

## TODOS:
- Fix distributed evaluation: currently this doesn't work as the input format to the evaluation function is not suitable for gathering across gpus (needs to be nested list/tuple/dicts of objects that satisfy `is_torch_tensor`) and currently `frame_data`  contains `Cameras` type.
- Refactor the `accelerator` object to be accessible by all functions instead of needing to pass it around everywhere? Maybe have a `Trainer` class and add it as a method?
- Update readme with installation instructions for accelerate and also commands for running jobs with torchrun and submitit/hydra

X-link: https://github.com/fairinternal/pytorch3d/pull/37

Reviewed By: davnov134, kjchalup

Differential Revision: D37543870

Pulled By: bottler

fbshipit-source-id: be9eb4e91244d4fe3740d87dafec622ae1e0cf76
2022-07-11 19:29:58 -07:00
Jeremy Reizenstein
57a40b3688 fix test
Summary: remove erroneous RandomDataLoaderMapProvider

Reviewed By: davnov134

Differential Revision: D37751116

fbshipit-source-id: cf3b555dc1e6304425914d1522b4f70407b498bf
2022-07-11 06:17:48 -07:00
David Novotny
522e5f0644 Bugfix - wrong mask bounds passed to box clamping
Summary: Fixes a bug

Reviewed By: bottler

Differential Revision: D37743350

fbshipit-source-id: d68e680d6027ae2b9814b2241fb72d3b74df77c1
2022-07-10 16:01:33 -07:00
David Novotny
e8390d3500 JsonIndexDatasetProviderV2
Summary: A new version of json index dataset provider supporting CO3Dv2

Reviewed By: shapovalov

Differential Revision: D37690918

fbshipit-source-id: bf2d5fc9d0f1220259e08661dafc69cdbe6b7f94
2022-07-09 17:16:24 -07:00
David Novotny
4300030d7a Changes for CO3Dv2 release [part1]
Summary:
Implements several changes needed for the CO3Dv2 release:
- FrameData contains crop_bbox_xywh which defines the outline of the image crop corresponding to the image-shaped tensors in FrameData
- revised the definition of a bounding box inside JsonDatasetIndex: bbox_xyxy is [xmin, ymin, xmax, ymax], where xmax, ymax are not inclusive; bbox_xywh = [xmin, ymin, xmax-xmin, ymax-ymin]
- is_filtered for detecting whether the entries of the dataset were somehow filtered
- seq_frame_index_to_dataset_index allows skipping entries that are not present in the dataset
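The bounding-box convention above can be captured in a one-line conversion (sketch; the real code works on tensors):

```python
def bbox_xyxy_to_xywh(bbox_xyxy):
    """Convert [xmin, ymin, xmax, ymax] (xmax/ymax non-inclusive) to
    [xmin, ymin, width, height], per the convention described above."""
    xmin, ymin, xmax, ymax = bbox_xyxy
    return [xmin, ymin, xmax - xmin, ymax - ymin]

box = bbox_xyxy_to_xywh([10, 20, 110, 70])
```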

Reviewed By: shapovalov

Differential Revision: D37687547

fbshipit-source-id: 7842756b0517878cc0964fc0935d3c0769454d78
2022-07-09 17:16:24 -07:00
Jeremy Reizenstein
00acf0b0c7 cu116 docker image
Summary: cu116 builds need to happen in a specific image.

Reviewed By: patricklabatut

Differential Revision: D37680352

fbshipit-source-id: 81bef0642ad832e83e4eba6321287759b3229303
2022-07-07 23:23:37 -07:00
Jeremy Reizenstein
a94f3f4c4b Add pytorch 1.12, drop pytorch 1.7
Summary: change deps

Reviewed By: kjchalup

Differential Revision: D37612290

fbshipit-source-id: 51af55159605b0edd89ffa9e177238466fc2d993
2022-07-06 14:36:45 -07:00
Jeremy Reizenstein
efb721320a extract camera_difficulty_bin_breaks
Summary: As part of removing Task, move camera difficulty bin breaks from hard code to the top level.

Reviewed By: davnov134

Differential Revision: D37491040

fbshipit-source-id: f2d6775ebc490f6f75020d13f37f6b588cc07a0b
2022-07-06 07:13:41 -07:00
Jeremy Reizenstein
40fb189c29 typing for trainer
Summary: Enable pyre checking of the trainer code.

Reviewed By: shapovalov

Differential Revision: D36545438

fbshipit-source-id: db1ea8d1ade2da79a2956964eb0c7ba302fa40d1
2022-07-06 07:13:41 -07:00
Jeremy Reizenstein
4e87c2b7f1 get_all_train_cameras
Summary: As part of removing Task, make the dataset code generate the source cameras for itself. There's a small optimization available here, in that the JsonIndexDataset could avoid loading images.

Reviewed By: shapovalov

Differential Revision: D37313423

fbshipit-source-id: 3e5e0b2aabbf9cc51f10547a3523e98c72ad8755
2022-07-06 07:13:41 -07:00
Jeremy Reizenstein
771cf8a328 more padding options in Dataloader
Summary: Add facilities for dataloading non-sequential scenes.

Reviewed By: shapovalov

Differential Revision: D37291277

fbshipit-source-id: 0a33e3727b44c4f0cba3a2abe9b12f40d2a20447
2022-07-06 07:13:41 -07:00
David Novotny
0dce883241 Refactor autodecoders
Summary: Refactors autodecoders. Tests pass.

Reviewed By: bottler

Differential Revision: D37592429

fbshipit-source-id: 8f5c9eac254e1fdf0704d5ec5f69eb42f6225113
2022-07-04 07:18:03 -07:00
Krzysztof Chalupka
ae35824f21 Refactor ViewMetrics
Summary:
Make ViewMetrics easy to replace by putting them into an OmegaConf dataclass.

Also, re-word a few variable names and fix minor TODOs.

Reviewed By: bottler

Differential Revision: D37327157

fbshipit-source-id: 78d8e39bbb3548b952f10abbe05688409fb987cc
2022-06-30 09:22:01 -07:00
Brian Hirsh
f4dd151037 fix internal index.Tensor test on wrong device
Summary: Landing https://github.com/pytorch/pytorch/pull/69607 made it an error to use indexing with `cpu_tensor[cuda_indices]`. There was one outstanding test in fbcode that incorrectly used indexing in that way, which is fixed here.

Reviewed By: bottler, osalpekar

Differential Revision: D37128838

fbshipit-source-id: 611b6f717b5b5d89fa61fd9ebeb513ad7e65a656
2022-06-29 09:30:37 -07:00
Roman Shapovalov
7ce8ed55e1 Fix: typo in dict processing
Summary:
David's code crashed when using the frame_annot["meta"] dictionary. It turns out we had a typo.
The tests were passing by chance since all the keys were single-character strings.

Reviewed By: bottler

Differential Revision: D37503987

fbshipit-source-id: c12b0df21116cfbbc4675a0182b9b9e6d62bad2e
2022-06-28 16:11:49 -07:00
Tristan Rice
7e0146ece4 shader: add SoftDepthShader and HardDepthShader for rendering depth maps (#36)
Summary:
X-link: https://github.com/fairinternal/pytorch3d/pull/36

This adds two shaders for rendering depth maps for meshes. This is useful for structure from motion applications that learn depths based off of camera pair disparities.

There are two shaders: a hard one that just returns the distances, and a soft one that does a cumsum on the probabilities of the points with a weighted sum. Areas that don't have any z faces are set to the zfar distance.

Output from this renderer is `[N, H, W]`; since it's just depth, there's no need for channels.

I haven't tested this in an ML model yet just in a notebook.
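The hard variant's behaviour can be sketched per pixel in plain Python (illustrative; the real shader operates on the rasterizer's z-buffer tensor, and 100.0 stands in for the camera's zfar):

```python
def hard_depth(zbuf_pixel, zfar=100.0):
    """Depth for one pixel from its per-face z entries, nearest first
    (sketch). Pixels covered by no faces (empty list here) get the
    zfar distance, as described above."""
    return zbuf_pixel[0] if zbuf_pixel else zfar

depths = [hard_depth(p) for p in [[1.5, 2.0], [], [0.3]]]
```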

hard:
![hardzshader](https://user-images.githubusercontent.com/909104/170190363-ef662c97-0bd2-488c-8675-0557a3c7dd06.png)

soft:
![softzshader](https://user-images.githubusercontent.com/909104/170190365-65b08cd7-0c49-4119-803e-d33c1d8c676e.png)

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1208

Reviewed By: bottler

Differential Revision: D36682194

Pulled By: d4l3k

fbshipit-source-id: 5d4e10c6fb0fff5427be4ddd3bd76305a7ccc1e2
2022-06-26 04:01:29 -07:00
Ignacio Rocco
0e4c53c612 Fix link in generic_model.py (#38)
Summary: Pull Request resolved: https://github.com/fairinternal/pytorch3d/pull/38

Reviewed By: ignacio-rocco

Differential Revision: D37415027

Pulled By: bottler

fbshipit-source-id: 9b17049e4762506cd5c152fd6e244d5f0d97855b
2022-06-24 06:41:06 -07:00
Jeremy Reizenstein
879495d38f omegaconf 2.2.2 compatibility
Summary: OmegaConf 2.2.2 doesn't like heterogeneous tuples or Sequence or Set members. Work around this.

Reviewed By: shapovalov

Differential Revision: D37278736

fbshipit-source-id: 123e6657947f5b27514910e4074c92086a457a2a
2022-06-24 04:18:01 -07:00
Jeremy Reizenstein
5c1ca757bb srn/idr followups
Summary: small followup to D37172537 (cba26506b6) and D37209012 (81d63c6382): changing default #harmonics and improving a test

Reviewed By: shapovalov

Differential Revision: D37412357

fbshipit-source-id: 1af1005a129425fd24fa6dd213d69c71632099a0
2022-06-24 04:07:15 -07:00
Jeremy Reizenstein
3e4fb0b9d9 provide fg_probability for blender data
Summary: The blender synthetic dataset contains object masks in the alpha channel. Provide these in the corresponding dataset.

Reviewed By: shapovalov

Differential Revision: D37344380

fbshipit-source-id: 3ddacad9d667c0fa0ae5a61fb1d2ffc806c9abf3
2022-06-22 06:11:50 -07:00
Jeremy Reizenstein
731ea53c80 Llff & blender convention fix
Summary: Images were coming out in the wrong format.

Reviewed By: shapovalov

Differential Revision: D37291278

fbshipit-source-id: c10871c37dd186982e7abf2071ac66ed583df2e6
2022-06-22 05:54:54 -07:00
Jeremy Reizenstein
2e42ef793f register ImplicitronDataSource
Summary: Just register ImplicitronDataSource. We don't use it as pluggable yet here.

Reviewed By: shapovalov

Differential Revision: D37315698

fbshipit-source-id: ac41153383f9ab6b14ac69a3dfdc44aca0d94995
2022-06-22 04:24:14 -07:00
Jeremy Reizenstein
81d63c6382 idr harmonic_fns and doc
Summary: Document the inputs of idr functions and distinguish n_harmonic_functions=0 (simple embedding) from -1 (no embedding).
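The 0-versus--1 distinction can be sketched as follows (illustrative; the function name and the 2**i frequency schedule are assumptions, not the exact implicitron code):

```python
import math

def embed(x, n_harmonic_functions):
    """Sketch of the switch described above: -1 means no embedding at
    all, 0 means the raw input only ('simple embedding'), and n > 0
    appends sin/cos pairs at frequencies 2**i."""
    if n_harmonic_functions == -1:
        return []
    out = [x]
    for i in range(n_harmonic_functions):
        out.append(math.sin((2 ** i) * x))
        out.append(math.cos((2 ** i) * x))
    return out
```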

Reviewed By: davnov134

Differential Revision: D37209012

fbshipit-source-id: 6e5c3eae54c4e5e8c3f76cad1caf162c6c222d52
2022-06-20 13:48:34 -07:00
Jeremy Reizenstein
28c1afaa9d nesting n_known_frames_for_test
Summary: Use generator.permutation instead of choice so that different options for n_known_frames_for_test are nested.
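The nesting property comes from taking a prefix of one fixed permutation rather than sampling each size independently. A pure-Python sketch (names and the 100-frame pool are illustrative):

```python
import random

def known_frames(seed, n_known):
    """Pick n_known test frames as a prefix of one seeded permutation,
    so a larger n_known_frames_for_test contains the smaller ones
    (sketch of the nesting idea; `choice` would not guarantee this)."""
    frames = list(range(100))
    random.Random(seed).shuffle(frames)  # same permutation for every n_known
    return frames[:n_known]

small, large = known_frames(0, 3), known_frames(0, 8)
```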

Reviewed By: davnov134

Differential Revision: D37210906

fbshipit-source-id: fd0d34ce62260417c3f63354a3f750aae9998b0d
2022-06-20 13:47:47 -07:00
Jeremy Reizenstein
cba26506b6 bg_color for lstm renderer
Summary: Allow specifying a color for non-opaque pixels in LSTMRenderer.

Reviewed By: davnov134

Differential Revision: D37172537

fbshipit-source-id: 6039726678bb7947f7d8cd04035b5023b2d5398c
2022-06-20 13:46:35 -07:00
Jeremy Reizenstein
65f667fd2e loading llff and blender datasets
Summary: Copy code from NeRF for loading LLFF data and blender synthetic data, and create dataset objects for them

Reviewed By: shapovalov

Differential Revision: D35581039

fbshipit-source-id: af7a6f3e9a42499700693381b5b147c991f57e5d
2022-06-16 03:09:15 -07:00
Pyre Bot Jr
7978ffd1e4 suppress errors in vision/fair/pytorch3d
Differential Revision: D37172764

fbshipit-source-id: a2ec367e56de2781a17f5e708eb5832ec9d7e6b4
2022-06-15 06:27:35 -07:00
John Reese
ea4f3260e4 apply new formatting config
Summary:
pyfmt now specifies a target Python version of 3.8 when formatting
with black. With this new config, black adds trailing commas to all
multiline function calls. This applies the new formatting as part
of rolling out the linttool-integration for pyfmt.

paintitblack

Reviewed By: zertosh, lisroach

Differential Revision: D37084377

fbshipit-source-id: 781a1b883a381a172e54d6e447137657977876b4
2022-06-10 16:04:56 -07:00
Jeremy Reizenstein
023a2369ae test configs are loadable
Summary: Add test that the yaml files deserialize.

Reviewed By: davnov134

Differential Revision: D36830673

fbshipit-source-id: b785d8db97b676686036760bfa2dd3fa638bda57
2022-06-10 12:22:46 -07:00
Jeremy Reizenstein
c0f88e04a0 make ExperimentConfig Configurable
Summary: Preparing for pluggables in experiment.py

Reviewed By: davnov134

Differential Revision: D36830674

fbshipit-source-id: eab499d1bc19c690798fbf7da547544df7e88fa5
2022-06-10 12:22:46 -07:00
Jeremy Reizenstein
6275283202 pluggable JsonIndexDataset
Summary: Make dataset type and args configurable on JsonIndexDatasetMapProvider.

Reviewed By: davnov134

Differential Revision: D36666705

fbshipit-source-id: 4d0a3781d9a956504f51f1c7134c04edf1eb2846
2022-06-10 12:22:46 -07:00
Jeremy Reizenstein
1d43251391 PathManagerFactory
Summary: Allow access to manifold internally by default.

Reviewed By: davnov134

Differential Revision: D36760481

fbshipit-source-id: 2a16bd40e81ef526085ac1b3f4606b63c1841428
2022-06-10 12:22:46 -07:00
Jeremy Reizenstein
1fb268dea6 allow get_default_args(JsonIndexDataset)
Summary: Changes to JsonIndexDataset to make it fit with OmegaConf.structured. Also match some default values to what the provider defaults to.

Reviewed By: davnov134

Differential Revision: D36666704

fbshipit-source-id: 65b059a1dbaa240ce85c3e8762b7c3db3b5a6e75
2022-06-10 12:22:46 -07:00
Jeremy Reizenstein
8bc0a04e86 hooks and allow registering base class
Summary: Allow a class to modify its subparts in get_default_args by defining the special function provide_config_hook.

Reviewed By: davnov134

Differential Revision: D36671081

fbshipit-source-id: 3e5b73880cb846c494a209c4479835f6352f45cf
2022-06-10 12:22:46 -07:00
Jeremy Reizenstein
5cd70067e2 Fix tests for OSS
Summary: New paths.

Reviewed By: patricklabatut

Differential Revision: D36734929

fbshipit-source-id: c0ce7ee9145ddca07ef3758d31cc3c261b088e7d
2022-06-01 13:52:26 -07:00
Krzysztof Chalupka
5b74a2cc27 Remove use of torch.tile to fix CI
Summary: Our tests fail (https://fburl.com/jmoqo9bz) because test_splatter_blend uses torch.tile, which is not supported in earlier torch versions. Replace it with Tensor.expand.

Reviewed By: bottler

Differential Revision: D36796098

fbshipit-source-id: 38d5b40667f98f3163b33f44e53e96b858cfeba2
2022-06-01 08:47:26 -07:00
Roman Shapovalov
49ed7b07b1 Adapting configs.
Summary: As subj.

Reviewed By: bottler

Differential Revision: D36705775

fbshipit-source-id: 7370710e863025dc07a140b41f77a7c752e3159f
2022-05-27 02:31:47 -07:00
Jeremy Reizenstein
c6519f29f0 chamfer for empty pointclouds #1174
Summary: Fix divide by zero for empty pointcloud in chamfer. Also for empty batches. In the process, we needed to regularize num_points_per_cloud for empty batches.
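The core of the fix is guarding the normalization by the point count. A minimal sketch on plain distance lists (the real code averages tensor distances per cloud; the zero fallback for an empty cloud is the assumption here):

```python
def chamfer_mean(dists):
    """Average per-point distances, guarding the empty case (sketch of
    the divide-by-zero fix: an empty point cloud contributes 0.0
    rather than NaN from 0/0)."""
    return sum(dists) / len(dists) if dists else 0.0

losses = [chamfer_mean(d) for d in [[1.0, 3.0], []]]
```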

Reviewed By: kjchalup

Differential Revision: D36311330

fbshipit-source-id: 3378ab738bee77ecc286f2110a5c8dc445960340
2022-05-26 14:56:22 -07:00
Krzysztof Chalupka
a42a89a5ba SplatterBlender follow-ups
Summary: A few minor additions I didn't fit into the SplatterBlender diffs, as requested by reviewers.

Reviewed By: jcjohnson

Differential Revision: D36682437

fbshipit-source-id: 57af995e766dfd2674b3984a3ba00aef7ca7db80
2022-05-26 13:03:57 -07:00
Jeremy Reizenstein
c31bf85a23 test runner for experiment.py
Summary: Add simple interactive testrunner for experiment.py

Reviewed By: shapovalov

Differential Revision: D35316221

fbshipit-source-id: d424bcba632eef89eefb56e18e536edb58ec6f85
2022-05-26 05:33:03 -07:00
Jeremy Reizenstein
fbd3c679ac rename ImplicitronDataset to JsonIndexDataset
Summary: The ImplicitronDataset class corresponds to JsonIndexDatasetMapProvider

Reviewed By: shapovalov

Differential Revision: D36661396

fbshipit-source-id: 80ca2ff81ef9ecc2e3d1f4e1cd14b6f66a7ec34d
2022-05-25 10:16:59 -07:00
Jeremy Reizenstein
34f648ede0 move targets
Summary: Move testing targets from pytorch3d/tests/TARGETS to pytorch3d/TARGETS.

Reviewed By: shapovalov

Differential Revision: D36186940

fbshipit-source-id: a4c52c4d99351f885e2b0bf870532d530324039b
2022-05-25 06:16:03 -07:00
Jeremy Reizenstein
f625fe1f8b further test fix
Summary: test_viewpool was inactive so missed being fixed in D36547815 (2d1c6d5d93)

Reviewed By: kjchalup

Differential Revision: D36625587

fbshipit-source-id: e7224eadfa5581fe61f10f67d2221071783de04a
2022-05-25 04:22:38 -07:00
Krzysztof Chalupka
7c25d34d22 SplatterPhongShader Benchmarks
Summary:
Benchmarking. We only use num_faces=2 for splatter, because as far as I can see one would never need to use more. Pose optimization and mesh optimization experiments (see next two diffs) showed that Splatter with 2 faces beats Softmax with 50 and 100 faces in terms of accuracy.

Results: We're slower at 64px^2. At 128px and 256px, we're slower than Softmax+50faces, but faster than Softmax+100faces. We're also slower at 10 faces/pix, but expectation as well as results show that more than 2 faces shouldn't be necessary. See more results in https://fburl.com/gdoc/ttv7u7hp

Reviewed By: jcjohnson

Differential Revision: D36210575

fbshipit-source-id: c8de28c8a59ce5fe21a47263bd43d2757b15d123
2022-05-24 22:31:12 -07:00
Krzysztof Chalupka
c5a83f46ef SplatterBlender
Summary: Splatting shader. See code comments for details. Same API as SoftPhongShader.

Reviewed By: jcjohnson

Differential Revision: D36354301

fbshipit-source-id: 71ee37f7ff6bb9ce028ba42a65741424a427a92d
2022-05-24 21:04:11 -07:00
Jeremy Reizenstein
1702c85bec avoid warning in ndc_grid_sample
Summary: Some ways of calling grid_sample give a warning in recent PyTorch, so stop doing this.

Reviewed By: kjchalup

Differential Revision: D36410619

fbshipit-source-id: 41dd4455298645c926f4d96c2084093b3f64ee2c
2022-05-24 18:18:21 -07:00
Jeremy Reizenstein
90d00f1b2b PLY heterogenous faces fix
Summary: PLY with mixture of triangle and quadrilateral faces was failing.

Reviewed By: gkioxari

Differential Revision: D36592981

fbshipit-source-id: 5373edb2f38389ac646a75fd2e1fa7300eb8d054
2022-05-24 01:40:22 -07:00
Jeremy Reizenstein
d27ef14ec7 test_forward_pass: speedup and RE fix
Summary: Use small image size for test_all_gm_configs

Reviewed By: shapovalov

Differential Revision: D36511528

fbshipit-source-id: 2c65f518a4f23626850343a62d103f85abfabd88
2022-05-22 15:23:17 -07:00
Jeremy Reizenstein
2d1c6d5d93 simplify image_feature_extractor control
Summary: If no view pooling, don't disable image_feature_extractor. Make image_feature_extractor default to absent.

Reviewed By: davnov134

Differential Revision: D36547815

fbshipit-source-id: e51718e1bcbf65b8b365a6e894d4324f136635e9
2022-05-20 08:32:19 -07:00
Jeremy Reizenstein
9fe15da3cd ImplicitronDatasetBase -> DatasetBase
Summary: Just a rename

Reviewed By: shapovalov

Differential Revision: D36516885

fbshipit-source-id: 2126e3aee26d89a95afdb31e06942d61cbe88d5a
2022-05-20 07:50:30 -07:00
Jeremy Reizenstein
0f12c51646 data_loader_map_provider
Summary: replace dataloader_zoo with a pluggable DataLoaderMapProvider.

Reviewed By: shapovalov

Differential Revision: D36475441

fbshipit-source-id: d16abb190d876940434329928f2e3f2794a25416
2022-05-20 07:50:30 -07:00
Jeremy Reizenstein
79c61a2d86 dataset_map_provider
Summary: replace dataset_zoo with a pluggable DatasetMapProvider. The logic is now in annotated_file_dataset_map_provider.

Reviewed By: shapovalov

Differential Revision: D36443965

fbshipit-source-id: 9087649802810055e150b2fbfcc3c197a761f28a
2022-05-20 07:50:30 -07:00
Jeremy Reizenstein
69c6d06ed8 New file for ImplicitronDatasetBase
Summary: Separate ImplicitronDatasetBase and FrameData (to be used by all data sources) from ImplicitronDataset (which is specific).

Reviewed By: shapovalov

Differential Revision: D36413111

fbshipit-source-id: 3725744cde2e08baa11aff4048237ba10c7efbc6
2022-05-20 07:50:30 -07:00
Jeremy Reizenstein
73dc109dba data_source
Summary:
Move dataset_args and dataloader_args from ExperimentConfig into a new member called datasource so that it can contain replaceables.

Also add enum Task for task type.

Reviewed By: shapovalov

Differential Revision: D36201719

fbshipit-source-id: 47d6967bfea3b7b146b6bbd1572e0457c9365871
2022-05-20 07:50:30 -07:00
Jeremy Reizenstein
9ec9d057cc Make feature extractor pluggable
Summary: Make ResNetFeatureExtractor be an implementation of FeatureExtractorBase.

Reviewed By: davnov134

Differential Revision: D35433098

fbshipit-source-id: 0664a9166a88e150231cfe2eceba017ae55aed3a
2022-05-18 08:50:18 -07:00
Jeremy Reizenstein
cd7b885169 don't check black version
Summary: skip checking the version of black because `black --version` looks different in different versions.

Reviewed By: kjchalup

Differential Revision: D36441262

fbshipit-source-id: a2d9a5cad4f5433909fb85bc9a584e91a2b72601
2022-05-17 09:08:06 -07:00
Jeremy Reizenstein
f632c423ef FrameAnnotation.meta, Optional in _dataclass_from_dict
Summary: Allow extra data in a FrameAnnotation. Therefore allow Optional[T] systematically in _dataclass_from_dict

Reviewed By: davnov134

Differential Revision: D36442691

fbshipit-source-id: ba70f6491574c08b0d9c9acb63f35514d29de214
2022-05-17 08:16:29 -07:00
Jeremy Reizenstein
f36b11fe49 allow Optional[Dict]=None in config
Summary: Fix recently observed case where enable_get_default_args was missing things declared as Optional[something mutable]=None.

Reviewed By: davnov134

Differential Revision: D36440492

fbshipit-source-id: 192ec07564c325b3b24ccc49b003788f67c63a3d
2022-05-17 05:06:18 -07:00
Krzysztof Chalupka
ea5df60d72 In blending, pull common functionality into get_background_color
Summary: A small refactor, originally intended for use with the splatter.

Reviewed By: bottler

Differential Revision: D36210393

fbshipit-source-id: b3372f7cc7690ee45dd3059b2d4be1c8dfa63180
2022-05-16 18:23:51 -07:00
Krzysztof Chalupka
4372001981 Make transform_points_screen's with_xyflip configurable
Summary: We'll need non-flipped screen coords in splatter.

Reviewed By: bottler

Differential Revision: D36337027

fbshipit-source-id: 897f88e8854bab215d2d0e502b25d15526ee86f1
2022-05-16 18:23:51 -07:00
Krzysztof Chalupka
61e2b87019 Add ability for phong_shading to return pixel_coords
Summary: The splatter can re-use pixel coords computed by the shader.

Reviewed By: bottler

Differential Revision: D36332530

fbshipit-source-id: b28e7abe22cca4f48b4108ad397aafc0f1347901
2022-05-16 18:23:51 -07:00
Roman Shapovalov
0143d63ba8 Correcting recent buggy code after debugging on devfair.
Summary:
1. Typo in the dataset path in the config.
2. Typo in num_frames.
3. Pick sequence was cached before it was modified for single-sequence.

Reviewed By: bottler

Differential Revision: D36417329

fbshipit-source-id: 6dcd75583de510412e1ae58f63db04bb4447403e
2022-05-16 12:17:08 -07:00
Jeremy Reizenstein
899a3192b6 create_x_impl
Summary: Make create_x delegate to create_x_impl so that users can rely on create_x_impl in their overrides of create_x.

Reviewed By: shapovalov, davnov134

Differential Revision: D35929810

fbshipit-source-id: 80595894ee93346b881729995775876b016fc08e
2022-05-16 04:42:03 -07:00
John Reese
3b2300641a apply import merging for fbcode (11 of 11)
Summary:
Applies new import merging and sorting from µsort v1.0.

When merging imports, µsort will make a best-effort to move associated
comments to match merged elements, but there are known limitations due to
the diynamic nature of Python and developer tooling. These changes should
not produce any dangerous runtime changes, but may require touch-ups to
satisfy linters and other tooling.

Note that µsort uses case-insensitive, lexicographical sorting, which
results in a different ordering compared to isort. This provides a more
consistent sorting order, matching the case-insensitive order used when
sorting import statements by module name, and ensures that "frog", "FROG",
and "Frog" always sort next to each other.

For details on µsort's sorting and merging semantics, see the user guide:
https://usort.readthedocs.io/en/stable/guide.html#sorting

Reviewed By: lisroach

Differential Revision: D36402260

fbshipit-source-id: 7cb52f09b740ccc580e61e6d1787d27381a8ce00
2022-05-15 12:53:03 -07:00
Jeremy Reizenstein
b5f3d3ce12 fix test_config_use
Summary: Fixes to reenable test_create_gm_overrides. Followup from D35852367 (47d06c8924) using logic from D36349361 (9e57b994ca).

Reviewed By: shapovalov

Differential Revision: D36371762

fbshipit-source-id: ad5fbbb4b5729fac41980d118f17a2589f7e6aba
2022-05-13 07:15:26 -07:00
Jeremy Reizenstein
2c1901522a return types for dataset_zoo, dataloader_zoo
Summary: Stronger typing for these functions

Reviewed By: shapovalov

Differential Revision: D36170489

fbshipit-source-id: a2104b29dbbbcfcf91ae1d076cd6b0e3d2030c0b
2022-05-13 05:38:14 -07:00
Jeremy Reizenstein
90ab219d88 clarify expand_args_fields
Summary: Fix doc and add a call to expand_args_fields for each implicit function.

Reviewed By: shapovalov

Differential Revision: D35929811

fbshipit-source-id: 8c3cfa56b8d8908fd2165614960e3d34b54717bb
2022-05-13 03:26:47 -07:00
Jeremy Reizenstein
9e57b994ca resnet34 weights for remote executor
Summary: Like vgg16 for lpips, internally we need resnet34 weights for coming feature extractor tests.

Reviewed By: davnov134

Differential Revision: D36349361

fbshipit-source-id: 1c33009c904766fcc15e7e31cd15d0f820c57354
2022-05-12 16:57:16 -07:00
David Novotny
e767c4b548 Raysampler as pluggable
Summary:
This converts raysamplers to ReplaceableBase so that users can hack their own raysampling impls.

Context: Andrea tried to implement TensoRF within implicitron but could not due to the need to implement his own raysampler.

Reviewed By: shapovalov

Differential Revision: D36016318

fbshipit-source-id: ef746f3365282bdfa9c15f7b371090a5aae7f8da
2022-05-12 15:39:35 -07:00
David Novotny
e85fa03c5a Generic Raymarcher refactor
Summary: Uses the GenericRaymarcher only as an ABC and derives two common implementations: the EA raymarcher and the Cumsum raymarcher (from Neural Volumes).

Reviewed By: shapovalov

Differential Revision: D35927653

fbshipit-source-id: f7e6776e71f8a4e99eefc018a47f29ae769895ee
2022-05-12 14:57:50 -07:00
David Novotny
47d06c8924 ViewPooler class
Summary: Implements a ViewPooler that groups ViewSampler and FeatureAggregator.

Reviewed By: shapovalov

Differential Revision: D35852367

fbshipit-source-id: c1bcaf5a1f826ff94efce53aa5836121ad9c50ec
2022-05-12 12:50:03 -07:00
John Reese
bef959c755 formatting changes from black 22.3.0
Summary:
Applies the black-fbsource codemod with the new build of pyfmt.

paintitblack

Reviewed By: lisroach

Differential Revision: D36324783

fbshipit-source-id: 280c09e88257e5e569ab729691165d8dedd767bc
2022-05-11 19:55:56 -07:00
Krzysztof Chalupka
c21ba144e7 Add Fragments.detach()
Summary: Add a capability to detach all detachable tensors in Fragments.

Reviewed By: bottler

Differential Revision: D35918133

fbshipit-source-id: 03b5d4491a3a6791b0a7bc9119f26c1a7aa43196
2022-05-11 18:50:24 -07:00
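Editor's note: "detach all detachable tensors" can be sketched with a dataclass; the field names here are illustrative, not the real Fragments layout:

```python
from dataclasses import dataclass, fields, replace
from typing import Any


@dataclass
class FragmentsSketch:
    # Illustrative fields only; the real Fragments holds rasterization
    # outputs such as pix_to_face, zbuf, bary_coords, dists.
    pix_to_face: Any = None
    bary_coords: Any = None

    def detach(self) -> "FragmentsSketch":
        # Detach every field that supports .detach(); leave the rest as-is.
        detached = {
            f.name: (v.detach() if hasattr(v, "detach") else v)
            for f in fields(self)
            for v in [getattr(self, f.name)]
        }
        return replace(self, **detached)
```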
Christian Kauten
d737a05e55 Update INSTALL.md (#1194)
Summary:
Resolve https://github.com/facebookresearch/pytorch3d/issues/1186 by fixing the minimal version of CUDA for installing from a wheel

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1194

Reviewed By: patricklabatut

Differential Revision: D36279396

Pulled By: bottler

fbshipit-source-id: 2371256a5451ec33c01d6fa9616c5b24fa83f7f8
2022-05-11 07:03:12 -07:00
David Novotny
2374d19da5 Test all CO3D model configs in test_forward_pass
Summary: Tests all possible model configs in test_forward_pass.py

Reviewed By: shapovalov

Differential Revision: D35851507

fbshipit-source-id: 4860ee1d37cf17a2faab5fc14d4b2ba0b96c4b8b
2022-05-11 05:40:05 -07:00
Pyre Bot Jr
1f3953795c suppress errors in vision/fair/pytorch3d
Differential Revision: D36269817

fbshipit-source-id: 47b8a77747e8297af3731fd0a388d4c5432dc1ff
2022-05-09 19:10:01 -07:00
Roman Shapovalov
a6dada399d Extracted ImplicitronModelBase and unified API for GenericModel and ModelDBIR
Summary:
To avoid model_zoo, we need to make GenericModel pluggable.
I also align creation APIs for convenience.

Reviewed By: bottler, davnov134

Differential Revision: D35933093

fbshipit-source-id: 8228926528eb41a795fbfbe32304b8019197e2b1
2022-05-09 15:23:07 -07:00
David Novotny
5c59841863 Add **kwargs to ViewMetrics.forward
Summary: GenericModel crashes when the `aux` field of any Renderer is populated. This is because `rendered.aux` is unpacked into ViewMetrics.forward, whose signature does not contain **kwargs; the contents of `aux` are therefore unknown to forward's signature, resulting in a crash.

Reviewed By: bottler

Differential Revision: D36166118

fbshipit-source-id: 906a067ea02a3648a69667422466451bc219ebf6
2022-05-09 03:04:34 -07:00
Krzysztof Chalupka
2c64635daa Add type hints to MeshRenderer(WithFragments)
Reviewed By: bottler

Differential Revision: D36148049

fbshipit-source-id: 87ca3ea8d5b5a315418cc597b36fd0a1dffb1e00
2022-05-06 14:48:26 -07:00
Jeremy Reizenstein
ec9580a1d4 test runner for eval_demo
Summary:
Create a test runner for the eval_demo code.  Debugging this is useful for understanding datasets.

Introduces an environment variable INTERACTIVE_TESTING for ignoring tests which are not intended for use in regular test runs.

Reviewed By: shapovalov

Differential Revision: D35964016

fbshipit-source-id: ab0f93aff66b6cfeca942b14466cf81f7feb2224
2022-05-06 08:31:19 -07:00
Jeremy Reizenstein
44cb00e468 lstsq fix in circle fitting for old PyTorch
Summary: the pytorch3d.compat.lstsq function needs a 2D rhs.

Reviewed By: patricklabatut

Differential Revision: D36195826

fbshipit-source-id: 9dbafea2057035cc04973f56729dc97b47dcac83
2022-05-06 04:12:51 -07:00
Jeremy Reizenstein
44ca5f95d9 Add vis to readthedocs
Summary: pytorch3d/vis has been missing. Reduce prominence of common.

Reviewed By: patricklabatut

Differential Revision: D36008733

fbshipit-source-id: bbc9fbb031c8dc95870087fa48df29410ae69e35
2022-05-06 04:07:43 -07:00
Pyre Bot Jr
a51a300827 suppress errors in fbcode/vision - batch 2
Differential Revision: D36120486

fbshipit-source-id: bddbf47957f4476f826ad20c2d6e146c98ee73e1
2022-05-03 20:29:21 -07:00
Jeremy Reizenstein
2bd65027ca version 0.6.2
Summary: Update PyTorch3D version number

Differential Revision: D35980555

fbshipit-source-id: 637ccd33eef22d909985d2fce3958c78f3d0d551
2022-04-28 04:48:24 -07:00
Jeremy Reizenstein
11635fbd7d INSTALL/README updates
Summary: Updates for version 0.6.2

Differential Revision: D35980557

fbshipit-source-id: e677a22d4f8a323376310dfb536133bee8045f1f
2022-04-28 04:48:24 -07:00
Jeremy Reizenstein
a268b18e07 update tutorials for version 0.6.2
Summary: colab is now 1.11.0

Differential Revision: D35980556

fbshipit-source-id: 988a06c652518fb61ccbef2e7197e3422a706250
2022-04-28 04:48:24 -07:00
Krzysztof Chalupka
7ea0756b05 fit_textured_mesh tutorial fixes
Summary: Updated to FoV cameras and added perspective_correct=False, otherwise it'll nan out.

Reviewed By: bottler

Differential Revision: D35970900

fbshipit-source-id: 569b8de0b124d415f4b841924ddc85585cee2dda
2022-04-27 12:18:03 -07:00
Krzysztof Chalupka
96889deab9 SplatterPhongShader 1: Pull out common Shader functionality into ShaderBase
Summary: Most of the shaders copypaste exactly the same code into `__init__` and `to`. I will be adding a new shader in the next diff, so let's make it a bit easier.

Reviewed By: bottler

Differential Revision: D35767884

fbshipit-source-id: 0057e3e2ae3be4eaa49ae7e2bf3e4176953dde9d
2022-04-27 12:07:51 -07:00
Jeremy Reizenstein
9f443ed26b isort->usort
Summary: Move from isort to usort now that usort supports sorting within lines.

Reviewed By: patricklabatut

Differential Revision: D35893280

fbshipit-source-id: 621c1cd285199d785408504430ee0bdf8683b21e
2022-04-26 08:34:54 -07:00
Jeremy Reizenstein
9320100abc object_mask only if required
Summary: New function to check if a renderer needs the object mask.

Reviewed By: davnov134

Differential Revision: D35254009

fbshipit-source-id: 4c99e8a1c0f6641d910eb32bfd6cfae9d3463d50
2022-04-26 08:01:45 -07:00
Jeremy Reizenstein
2edb93d184 chunked_inputs
Summary: Make method for SDF's use of object mask more general, so that a renderer can be given per-pixel values.

Reviewed By: shapovalov

Differential Revision: D35247412

fbshipit-source-id: 6aeccb1d0b5f1265a3f692a1453407a07e51a33c
2022-04-26 08:01:45 -07:00
Jeremy Reizenstein
41c594ca37 fix entry points in setup.py
Summary: For `pip install` without -e, we need to name the entry point functions in setup.py.

Reviewed By: patricklabatut

Differential Revision: D35933037

fbshipit-source-id: be15ae1a4bb7c5305ea2ba992d07f3279c452250
2022-04-26 07:59:15 -07:00
Krzysztof Chalupka
c3c4495c7a Fix image links in renderer documentation
Summary: Repo has jpgs but docs/website want pngs.

Reviewed By: nikhilaravi

Differential Revision: D35596475

fbshipit-source-id: 4cafd405c06c0eb339001a8db2422dbbd1f8f28a
2022-04-14 16:37:07 -07:00
Tim Hatch
34bbb3ad32 apply import merging for fbcode/vision/fair (2 of 2)
Summary:
Applies new import merging and sorting from µsort v1.0.

When merging imports, µsort will make a best-effort to move associated
comments to match merged elements, but there are known limitations due to
the dynamic nature of Python and developer tooling. These changes should
not produce any dangerous runtime changes, but may require touch-ups to
satisfy linters and other tooling.

Note that µsort uses case-insensitive, lexicographical sorting, which
results in a different ordering compared to isort. This provides a more
consistent sorting order, matching the case-insensitive order used when
sorting import statements by module name, and ensures that "frog", "FROG",
and "Frog" always sort next to each other.

For details on µsort's sorting and merging semantics, see the user guide:
https://usort.readthedocs.io/en/stable/guide.html#sorting

Reviewed By: bottler

Differential Revision: D35553814

fbshipit-source-id: be49bdb6a4c25264ff8d4db3a601f18736d17be1
2022-04-13 06:51:33 -07:00
Jeremy Reizenstein
df08ea8eb4 Fix inferred typing
Summary: D35513897 (4b94649f7b) was a pyre infer job which got some things wrong. Correct by adding the correct types, so these things shouldn't need worrying about again.

Reviewed By: patricklabatut

Differential Revision: D35546144

fbshipit-source-id: 89f6ea2b67be27aa0b0b14afff4347cccf23feb7
2022-04-13 04:40:56 -07:00
Jeremy Reizenstein
78fd5af1a6 make points2volumes feature rescaling optional
Summary: Add option to not rescale the features, giving more control. https://github.com/facebookresearch/pytorch3d/issues/1137

Reviewed By: nikhilaravi

Differential Revision: D35219577

fbshipit-source-id: cbbb643b91b71bc908cedc6dac0f63f6d1355c85
2022-04-13 04:39:47 -07:00
h5jam
0a7c354dc1 fix typo on NeRF tutorial (#1163)
Summary:
Hello, I'm Seungoh from South Korea.

I found a typo while learning the tutorials.
The wrong numbers have been changed to the right ones.

Thank you.

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1163

Reviewed By: patricklabatut

Differential Revision: D35546843

Pulled By: bottler

fbshipit-source-id: b6e70cdf821fd4a108dfd416e8f4bcb3ecbeb449
2022-04-13 04:35:05 -07:00
Pyre Bot Jr
b79764ea69 suppress errors in fbcode/vision - batch 2
Differential Revision: D35590813

fbshipit-source-id: 0f35d7193f839a41f3cac18bf20236b815368f19
2022-04-12 15:56:12 -07:00
Krzysztof Chalupka
b1ff9d9fd4 Disallow None vertex/face lists in texture submeshing
Summary: In order to simplify the interface, we disallow passing None as vertex/face lists to textures.submeshes. This function would only ever get called from within meshes.submeshes where we can provide both arguments, even if they're not necessary for a specific submesh type.

Reviewed By: bottler

Differential Revision: D35581161

fbshipit-source-id: aeab99308a319b144e141ca85ca7515f855116da
2022-04-12 10:46:48 -07:00
Krzysztof Chalupka
22f86072ca Submesh 4/n: TexturesVertex submeshing
Summary: Add submeshing capability for meshes with TexturesVertex.

Reviewed By: bottler

Differential Revision: D35448534

fbshipit-source-id: 6d16a31a5bfb24ce122cf3c300a7616bc58353d1
2022-04-11 16:27:53 -07:00
Krzysztof Chalupka
050f650ae8 Submesh 3/n: Add submeshing functionality
Summary:
Copypasting the docstring:
```
        Split a mesh into submeshes, defined by face indices of the original Meshes object.

        Args:
          face_indices:
            Let the original mesh have verts_list() of length N.
            Can be either
              - List of length N. The n-th element is a list of length num_submeshes_n
                (empty lists are allowed). Each element of the n-th sublist is a LongTensor
                of length num_faces.
              - List of length N. The n-th element is a possibly empty padded LongTensor of
                shape (num_submeshes_n, max_num_faces).

        Returns:
          Meshes object with selected submeshes. The submesh tensors are cloned.

        Currently submeshing only works with no textures or with the TexturesVertex texture.

        Example:

        Take a Meshes object `cubes` with 4 meshes, each a translated cube. Then:
            * len(cubes) is 4, len(cubes.verts_list()) is 4, len(cubes.faces_list()) is 4,
            * [cube_verts.size for cube_verts in cubes.verts_list()] is [8, 8, 8, 8],
            * [cube_faces.size for cube_faces in cubes.faces_list()] is [6, 6, 6, 6],

        Now let front_facet, top_and_bottom, all_facets be LongTensors of
        sizes (2), (4), and (12), each picking up a number of facets of a cube by specifying
        the appropriate triangular faces.

        Then let `subcubes = cubes.submeshes([[front_facet, top_and_bottom], [], [all_facets], []])`.
            * len(subcubes) is 3.
            * subcubes[0] is the front facet of the cube contained in cubes[0].
            * subcubes[1] is a mesh containing the (disconnected) top and bottom facets of cubes[0].
            * subcubes[2] is a clone of cubes[2].
            * There are no submeshes of cubes[1] and cubes[3] in subcubes.
            * subcubes[0] and subcubes[1] are not watertight. subcubes[2] is.
```

Reviewed By: bottler

Differential Revision: D35440657

fbshipit-source-id: 8a6d2d300ce226b5b9eb440688528b5e795195a1
2022-04-11 16:27:53 -07:00
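Editor's note: the submeshing mechanism described in the docstring above can be sketched in plain Python (lists instead of tensors, single-submesh-list shape; illustrative only, not the Meshes.submeshes implementation):

```python
def submeshes(verts, faces, face_indices_per_submesh):
    # verts: list of (x, y, z) tuples; faces: list of (i, j, k) index triples.
    # face_indices_per_submesh: for each submesh, the indices of the faces
    # to keep. For each submesh we select faces, keep only the vertices
    # they reference, and reindex the faces into the new vertex list.
    out = []
    for face_idx in face_indices_per_submesh:
        sub_faces = [faces[i] for i in face_idx]
        used = sorted({v for f in sub_faces for v in f})
        remap = {old: new for new, old in enumerate(used)}
        out.append((
            [verts[v] for v in used],
            [tuple(remap[v] for v in f) for f in sub_faces],
        ))
    return out
```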
Krzysztof Chalupka
8596fcacd2 Submesh 2/n: to_sorted
Summary:
Sort a mesh's vertices in lexicographic order, and remap the face indices accordingly. Textured meshes are not supported yet, but will be added down the stack.

This, together with mesh equality, can be used to compare two meshes in a way invariant to vertex permutations, as shown in the unit tests.

We do not want the submeshing mechanism to guarantee any particular vertex order, leaving that up to the implementation, so we need this function for testing.

Reviewed By: bottler

Differential Revision: D35440656

fbshipit-source-id: 5a4dd921fdb00625a33da08b5fea79e20ac6402c
2022-04-11 16:27:53 -07:00
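Editor's note: the sort-and-remap idea above is easy to sketch in plain Python (lists of tuples instead of tensors; illustrative only):

```python
def to_sorted(verts, faces):
    # verts: list of (x, y, z) tuples; faces: list of (i, j, k) index triples.
    # Sort vertices lexicographically, then remap every face index from the
    # old vertex position to its position in the sorted list.
    order = sorted(range(len(verts)), key=lambda i: verts[i])
    new_index = {old: new for new, old in enumerate(order)}
    sorted_verts = [verts[i] for i in order]
    remapped_faces = [tuple(new_index[i] for i in f) for f in faces]
    return sorted_verts, remapped_faces
```

Two meshes that differ only by a vertex permutation become identical after this normalization, which is what makes it useful for testing submeshing.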
Krzysztof Chalupka
7f097b064b Submesh 1/n: Implement mesh equality
Summary: Adding a mesh equality operator. Two Meshes objects m1, m2 are equal iff their vertex lists, face lists, and normals lists are equal. Textured meshes are not supported yet, but will be added for vertex textures down the stack.

Reviewed By: bottler, nikhilaravi

Differential Revision: D35440655

fbshipit-source-id: 69974a59c091416afdb2892896859a189f5ebf3a
2022-04-11 16:27:53 -07:00
Krzysztof Chalupka
aab95575a6 Submesh 0/n: Default to empty Meshes
Summary:
The default behavior of Meshes (with verts=None, faces=None) throws an exception:
```
meshes = Meshes()
> ValueError: Verts and Faces must be either a list or a tensor with shape (batch_size, N, 3) where N is either the maximum number of verts or faces respectively.
```

Instead, let's default to an empty mesh, following e.g. PyTorch:
```
empty_tensor = torch.FloatTensor()
> torch.tensor([])
```

This change is backwards-compatible (you can still init with verts=[], faces=[]).

Reviewed By: bottler, nikhilaravi

Differential Revision: D35443453

fbshipit-source-id: d638a8fef49a089bf0da6dd2201727b94ceb21ec
2022-04-11 16:27:53 -07:00
Georgia Gkioxari
67fff956a2 add L1 support for KNN & Chamfer
Summary:
Added L1 norm for KNN and chamfer op
* The norm is now specified with a variable `norm` which can only be 1 or 2

Reviewed By: bottler

Differential Revision: D35419637

fbshipit-source-id: 77813fec650b30c28342af90d5ed02c89133e136
2022-04-10 10:27:20 -07:00
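Editor's note: the `norm` argument above selects between L1 and (squared) L2 point distances. A minimal pure-Python sketch of a symmetric Chamfer distance with that switch (illustrative; the real op runs batched on tensors via KNN):

```python
def chamfer_distance(pts_a, pts_b, norm=2):
    # pts_a, pts_b: non-empty lists of same-dimension point tuples.
    if norm not in (1, 2):
        raise ValueError("norm must be 1 or 2")

    def dist(p, q):
        if norm == 1:
            return sum(abs(a - b) for a, b in zip(p, q))
        # Squared L2, mirroring the squared distances KNN returns.
        return sum((a - b) ** 2 for a, b in zip(p, q))

    a_to_b = sum(min(dist(p, q) for q in pts_b) for p in pts_a) / len(pts_a)
    b_to_a = sum(min(dist(q, p) for p in pts_a) for q in pts_b) / len(pts_b)
    return a_to_b + b_to_a
```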
Pyre Bot Jr
4b94649f7b Add annotations to vision/fair/pytorch3d
Reviewed By: shannonzhu

Differential Revision: D35513897

fbshipit-source-id: 1ca12671df1bd6608a7dce9193c145d5985c0b45
2022-04-08 18:23:41 -07:00
Pyre Bot Jr
3809b6094c suppress errors in vision/fair/pytorch3d
Differential Revision: D35455033

fbshipit-source-id: c4fe9577edd7beb9c40be1cb387f125d53a6a577
2022-04-06 18:53:08 -07:00
Jeremy Reizenstein
722646863c Optional[Configurable] in config
Summary: A new type of auto-expanded member of a Configurable: something of type Optional[X] where X is a Configurable. This works like X but its construction is controlled by a boolean membername_enabled.

Reviewed By: davnov134

Differential Revision: D35368269

fbshipit-source-id: 7e0c8a3e8c4930b0aa942fa1b325ce65336ebd5f
2022-04-06 05:56:14 -07:00
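Editor's note: the gating pattern described above (construction of an Optional member controlled by a boolean `membername_enabled`) can be sketched with plain dataclasses. Names and wiring here are illustrative, not the actual implicitron config machinery:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class XConfig:
    # Stand-in for a Configurable member.
    scale: float = 1.0


@dataclass
class Outer:
    # x is only constructed when x_enabled is True.
    x_enabled: bool = False
    x: Optional[XConfig] = None

    def __post_init__(self):
        if self.x_enabled and self.x is None:
            self.x = XConfig()
```

Flipping `x_enabled` in a config is then enough to switch the optional sub-module on or off without touching its arguments.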
Jeremy Reizenstein
e10a90140d enable_get_default_args to allow pickling get_default_args(f)
Summary:
Try again to solve https://github.com/facebookresearch/pytorch3d/issues/1144 pickling problem.
D35258561 (24260130ce) didn't work.

When writing a function or vanilla class C which you want people to be able to call get_default_args on, you must add the line enable_get_default_args(C) to it. This causes autogeneration of a hidden dataclass in the module.

Reviewed By: davnov134

Differential Revision: D35364410

fbshipit-source-id: 53f6e6fff43e7142ae18ca3b06de7d0c849ef965
2022-04-06 03:32:31 -07:00
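Editor's note: the "autogenerated hidden dataclass" idea above can be sketched without pytorch3d. This is a toy reimplementation of the mechanism (the real `enable_get_default_args` lives in pytorch3d's implicitron config tools and returns an omegaconf object):

```python
import inspect
from dataclasses import make_dataclass

_REGISTRY = {}


def enable_get_default_args(fn):
    # Generate a dataclass holding fn's defaulted arguments. Because the
    # class is created eagerly at module import time, instances of it can
    # be pickled (the motivation behind the real API).
    fields = [
        (name, type(p.default), p.default)
        for name, p in inspect.signature(fn).parameters.items()
        if p.default is not inspect.Parameter.empty
    ]
    _REGISTRY[fn] = make_dataclass(fn.__name__ + "Args", fields)


def get_default_args(fn):
    return _REGISTRY[fn]()
```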
yaookyie
4c48beb226 Fix scatter_ error in cubify (#1067)
Summary:
Error Reproduction:

python=3.8.12
pytorch=1.9.1
pytorch3d=0.6.1
cudatoolkit=11.1.74

test.py:
```python
import torch
from pytorch3d.ops import cubify
voxels = torch.Tensor([[[[0,1], [0,0]], [[0,1], [0,0]]]]).float()
meshes = cubify(voxels, 0.5, device="cpu")
```

The error appears when `device="cpu"` and `pytorch=1.9.1` (works fine with pytorch=1.10.2)

Error message:
```console
/home/kyle/anaconda3/envs/adapt-net/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at  /opt/conda/conda-bld/pytorch_1631630839582/work/aten/src/ATen/native/BinaryOps.cpp:467.)
  return torch.floor_divide(self, other)
Traceback (most recent call last):
  File "test.py", line 5, in <module>
    meshes = cubify(voxels, 0.5, device="cpu")
  File "/home/kyle/anaconda3/envs/adapt-net/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/kyle/Desktop/pytorch3d/pytorch3d/ops/cubify.py", line 227, in cubify
    idleverts.scatter_(0, grid_faces.flatten(), 0)
RuntimeError: Expected index [60] to be smaller than self [27] apart from dimension 0 and to be smaller size than src [27]
```

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1067

Reviewed By: nikhilaravi

Differential Revision: D34893567

Pulled By: bottler

fbshipit-source-id: aa95980f7319302044141f7821ef48129cfa37a6
2022-04-05 13:16:36 -07:00
David Novotny
4db9fc11d2 Allow setting bin_size for render_point_clouds_pytorch3d
Summary: This is required to suppress a huge stdout full of warnings about overflown bins.

Reviewed By: bottler

Differential Revision: D35359824

fbshipit-source-id: 39214b1bdcb4a5d5debf8ed498b2ca81fa43d210
2022-04-04 09:26:54 -07:00
Jeremy Reizenstein
3b8a33e9c5 store original declared types in Configurable
Summary: Aid reflection by adding the original declared types of replaced members of a configurable as values in _processed_members.

Reviewed By: davnov134

Differential Revision: D35358422

fbshipit-source-id: 80ef3266144c51c1c2105f349e0dd3464e230429
2022-04-04 07:19:56 -07:00
Jeremy Reizenstein
199309fcf7 logging
Summary: Use logging instead of printing in the internals of implicitron.

Reviewed By: davnov134

Differential Revision: D35247581

fbshipit-source-id: be5ddad5efe1409adbae0575d35ade6112b3be63
2022-04-04 06:53:16 -07:00
Jeremy Reizenstein
6473aa316c avoid visdom import in tests
Summary: This might make the implicitron tests work better on RE.

Reviewed By: davnov134

Differential Revision: D35283131

fbshipit-source-id: 4dda9684f632ab6e9cebcbf1e6e4a8243ec00c85
2022-04-04 04:43:33 -07:00
Jeremy Reizenstein
2802fd9398 fix Optional[List] in Configurable
Summary: Optional[not_a_type] was causing errors.

Reviewed By: davnov134

Differential Revision: D35355530

fbshipit-source-id: e9b52cfd6347ffae0fe688ef30523a4092ccf9fd
2022-04-04 04:28:17 -07:00
Roman Shapovalov
a999fc22ee Type safety fixes
Summary: Pyre expects Mapping for ** operator.

Reviewed By: bottler

Differential Revision: D35288632

fbshipit-source-id: 34d6f26ad912b3a5046f440922bb6ed2fd86f533
2022-04-01 04:24:46 -07:00
Jeremy Reizenstein
24260130ce _allow_untyped for get_default_args
Summary:
ListConfig and DictConfig members of get_default_args(X) when X is a callable will contain references to a temporary dataclass and therefore be unpicklable. Avoid this in a few cases.

Fixes https://github.com/facebookresearch/pytorch3d/issues/1144

Reviewed By: shapovalov

Differential Revision: D35258561

fbshipit-source-id: e52186825f52accee9a899e466967a4ff71b3d25
2022-03-31 06:31:45 -07:00
Roman Shapovalov
a54ad2b912 get_default_args for callables respects non-class type annotations and Optionals
Summary: as subj

Reviewed By: davnov134

Differential Revision: D35194863

fbshipit-source-id: c8e8f234083d4f0f93dca8d93e090ca0e1e1972d
2022-03-29 11:36:11 -07:00
janEbert
b602edccc4 Fix dtype propagation (#1141)
Summary:
Previously, dtypes were not propagated correctly in composed transforms, resulting in errors when different dtypes were mixed. Even specifying a dtype in the constructor does not fix this. Neither does specifying the dtype for each composition function invocation (e.g. as a `kwarg` in `rotate_axis_angle`).

With the change, I also had to modify the default dtype of `RotateAxisAngle`, which was `torch.float64`; it is now `torch.float32` like for all other transforms. This was required because the fix in propagation broke some tests due to dtype mismatches.

This change in default dtype in turn broke two tests due to precision changes (calculations that were previously done in `torch.float64` were now done in `torch.float32`), so I changed the precision tolerances to be less strict. I chose the lowest power of ten that passed the tests here.

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1141

Reviewed By: patricklabatut

Differential Revision: D35192970

Pulled By: bottler

fbshipit-source-id: ba0293e8b3595dfc94b3cf8048e50b7a5e5ed7cf
2022-03-29 08:57:42 -07:00
Jeremy Reizenstein
21262e38c7 Optional ReplaceableBase
Summary: Allow things like `renderer:Optional[BaseRenderer]` in configurables.

Reviewed By: davnov134

Differential Revision: D35118339

fbshipit-source-id: 1219321b2817ed4b26fe924c6d6f73887095c985
2022-03-29 08:43:46 -07:00
Jeremy Reizenstein
e332f9ffa4 test_build for implicitron
Summary: To ensure that tests outside implicitron/ don't use implicitron, split the test for recursive includes in to two. License header checking is not needed here any more.

Reviewed By: shapovalov

Differential Revision: D35077830

fbshipit-source-id: 2ebe7436a6dcc5d21a116434f6ddd08705dfab34
2022-03-29 05:09:27 -07:00
Jeremy Reizenstein
0c3bed55be setup.py for implicitron_trainer
Summary: Enable `pytorch3d_implicitron_runner` executable

Reviewed By: shapovalov

Differential Revision: D34754902

fbshipit-source-id: 213f3e9183e3f7dd7b4df16ad77d95fbc971d625
2022-03-28 04:50:26 -07:00
Jeremy Reizenstein
97894fb37b Reinforce test skipping
Summary: Attempt to solve an internal issue

Reviewed By: shapovalov

Differential Revision: D35143263

fbshipit-source-id: b4fd9ee441d85f0a3ee08f2f1e7febd1c1ccbe86
2022-03-25 07:25:54 -07:00
Roman Shapovalov
645a47d054 Return a typed structured config from default_args for callables
Summary:
Before this fix, running get_default_args(C: Callable) returned an unstructured DictConfig, which caused Enums to be handled incorrectly.

WIP update: Currently tests still fail whenever a function signature contains an untyped argument; this still needs to be fixed.

Reviewed By: bottler

Differential Revision: D34932124

fbshipit-source-id: ecdc45c738633cfea5caa7480ba4f790ece931e8
2022-03-25 07:08:01 -07:00
Jeremy Reizenstein
8ac5e8f083 add missing __init__.py files
Summary: Some directories in implicitron were missing __init__.py files.

Reviewed By: patricklabatut

Differential Revision: D35076364

fbshipit-source-id: f74442766efe8694fdd47954ac4882e7c4daac60
2022-03-24 07:04:38 -07:00
Jeremy Reizenstein
92f9dfe9d6 overflow warning typo
Summary: bin_size should be 0 not -1 for naive rasterization. See https://github.com/facebookresearch/pytorch3d/issues/1129

Reviewed By: patricklabatut

Differential Revision: D35077115

fbshipit-source-id: b81ff74f47c78429977802f7dcadfd1b96676f8c
2022-03-24 06:53:35 -07:00
Jeremy Reizenstein
f2cf9d4d0b windows fix
Summary: Attempt to reduce nvcc trouble on windows by (1) avoiding flag for c++14 and (2) avoiding `torch/extension.h`, which introduces pybind11, in `.cu` files.

Reviewed By: patricklabatut

Differential Revision: D34969868

fbshipit-source-id: f3878d6a2ba9d644e87ae7b6377cb5008b4b6ce3
2022-03-24 06:52:05 -07:00
Roman Shapovalov
e2622d79c0 Using the new dataset idx API everywhere.
Summary: Using the API from D35012121 everywhere.

Reviewed By: bottler

Differential Revision: D35045870

fbshipit-source-id: dab112b5e04160334859bbe8fa2366344b6e0f70
2022-03-24 05:33:25 -07:00
Roman Shapovalov
c0bb49b5f6 API for accessing frames in order in Implicitron dataset.
Summary: We often want to iterate over frames in the sequence in temporal order. This diff provides the API to do that. `seq_to_idx` should probably be considered to have `protected` visibility.

Reviewed By: davnov134

Differential Revision: D35012121

fbshipit-source-id: 41896672ec35cd62f3ed4be3aa119efd33adada1
2022-03-24 05:33:25 -07:00
Jeremy Francis Reizenstein
05f656c01f Update ShipIt Sync
fbshipit-source-id: d20e2f3d7ae6ca8c4a1e72002c1be8d75217939d
2022-03-23 11:07:56 -07:00
Jeremy Francis Reizenstein
4c22855a23 Update ShipIt Sync
fbshipit-source-id: 29b8a643c0218375bf90b9c1fb8853dedd0906fe
2022-03-23 08:50:53 -07:00
Jeremy Reizenstein
cdd2142dd5 implicitron v0 (#1133)
Co-authored-by: Jeremy Francis Reizenstein <bottler@users.noreply.github.com>
2022-03-21 13:20:10 -07:00
Roman Shapovalov
0e377c6850 Monte-Carlo rasterisation; arbitrary dimensionality of AlphaCompositor blending
Summary:
Fixes required for MC rasterisation to work.
1) The wrong number of channels for the background was used (derived from the points' dimensions, not the features' dimensions);
2) the split of the results was done on the wrong dimension;
3) CORE CHANGE: blending in the alpha compositor was assuming RGBA input.

Reviewed By: davnov134

Differential Revision: D34933673

fbshipit-source-id: a5cc9f201ea21e114639ab9e291a10888d495206
2022-03-17 05:12:39 -07:00
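Editor's note: the core change above removes the RGBA assumption, so the compositor works for any number of feature channels. Per-pixel front-to-back alpha compositing can be sketched like this (pure Python over one pixel's K points; illustrative only, the real op is a batched CUDA/C++ kernel):

```python
def composite_alpha(alphas, features):
    # alphas: K opacities in [0, 1], nearest point first.
    # features: K feature vectors, each of arbitrary length C
    # (RGB, RGBA, or any learned feature dimension).
    C = len(features[0])
    out = [0.0] * C
    transmittance = 1.0  # fraction of light not yet absorbed
    for alpha, feat in zip(alphas, features):
        w = alpha * transmittance
        out = [o + w * f for o, f in zip(out, feat)]
        transmittance *= (1.0 - alpha)
    return out
```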
Roman Shapovalov
e64f25c255 README file.
Summary: as subj

Reviewed By: davnov134

Differential Revision: D34758227

fbshipit-source-id: c22e7c4c6e69e9ef872b46c99ece901c58c23d71
2022-03-16 07:42:52 -07:00
Jeremy Reizenstein
c85673c626 PyTorch 1.11.0
Summary: Add builds for PyTorch 1.11.0.

Reviewed By: nikhilaravi

Differential Revision: D34861021

fbshipit-source-id: 1a1c46fac48719bc66c81872e65531a48ff538ed
2022-03-16 05:44:40 -07:00
Xie Fangyuan
3de3c13a0f R2n2 (#1124)
Summary:
1. Fix https://github.com/facebookresearch/pytorch3d/issues/1115 Change the type annotations of three arguments in the initializer of the `R2N2` class.
2. Fix https://github.com/facebookresearch/pytorch3d/issues/1118 Override two functions in the `BlenderCamera` class required when subclassing the `CamerasBase` class.

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1124

Reviewed By: nikhilaravi

Differential Revision: D34890900

Pulled By: bottler

fbshipit-source-id: 65c385369a5964ecbb17ab28f279d5284614b487
2022-03-16 05:44:17 -07:00
Jeremy Reizenstein
9b5a3ffa6c PLY with uint face data (#1104)
Summary: Fix assumption that face indices are signed in the PLY file, as reported in #1104.

Reviewed By: nikhilaravi

Differential Revision: D34892598

fbshipit-source-id: a8b23bfac1357bdc11bbbf752098319142239804
2022-03-16 05:42:34 -07:00
Jeremy Reizenstein
1701b76a31 another meshgrid fix for old pytorch
Summary: Try to fix circleci again.

Reviewed By: nikhilaravi

Differential Revision: D34752188

fbshipit-source-id: 5966c585b61d77df1d8dd97c24383cf74dfb1fae
2022-03-11 01:14:59 -08:00
Jeremy Reizenstein
57a33b25c1 add MeshRendererWithFragments to __init__s
Summary: As noticed in https://github.com/facebookresearch/pytorch3d/issues/1098 , it would be useful to make this more available.

Reviewed By: nikhilaravi

Differential Revision: D34752526

fbshipit-source-id: 5a127bd557a0cd626f36bf194f22bc0a0a6a2436
2022-03-11 01:14:39 -08:00
Jeremy Reizenstein
c371a9a6cc rasterizer.to without cameras
Summary: As reported in https://github.com/facebookresearch/pytorch3d/pull/1100, a rasterizer couldn't be moved if it was missing the optional cameras member. Fix that. This matters because the renderer.to calls rasterizer.to, so this to() could be called even by a user who never sets a cameras member.

Reviewed By: nikhilaravi

Differential Revision: D34643841

fbshipit-source-id: 7e26e32e8bc585eb1ee533052754a7b59bc7467a
2022-03-08 23:54:38 -08:00
Jeremy Reizenstein
4a1f176054 fix _num_faces_per_mesh in join_batch
Summary: As reported in https://github.com/facebookresearch/pytorch3d/pull/1100, _num_faces_per_mesh was changing in the source mesh in join_batch. This affects both TexturesUV and TexturesAtlas.

Reviewed By: nikhilaravi

Differential Revision: D34643675

fbshipit-source-id: d67bdaca7278f18a76cfb15ba59d0ea85575bd36
2022-03-08 23:54:37 -08:00
dmitryvinn
16d0aa82c1 docs: add social banner in support of Ukraine (#1101)
Summary:
Our mission at [Meta Open Source](https://opensource.facebook.com/) is to empower communities through open source, and we believe that it means building a welcoming and safe environment for all. As a part of this work, we are adding this banner in support for Ukraine during this crisis.

![image](https://user-images.githubusercontent.com/12485205/156670302-756fac7c-51ba-4e24-b463-f79730dbaba6.png)

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1101

Reviewed By: bottler

Differential Revision: D34628257

Pulled By: dmitryvinn-fb

fbshipit-source-id: 5863afb59a2b9431e8e2ebc5856254ab0cdfcfe8
2022-03-04 01:51:16 -08:00
Jeremy Reizenstein
69b27d160e reallow scalar background color for point rendering
Summary: A scalar background color was not meant to be allowed for the point renderer. It used to be ignored with a warning, but a recent code change made it an error. It was being used, at least in the black (value=0.0) case. Re-enable it.

Reviewed By: nikhilaravi

Differential Revision: D34519651

fbshipit-source-id: d37dcf145bb7b8999c9265cf8fc39b084059dd18
2022-03-01 05:12:55 -08:00
Andres Suarez
84a569c0aa Fix unnecessary LICENSELINT suppressions
Reviewed By: zsol

Differential Revision: D34526295

fbshipit-source-id: f511370dc3186bc396d68a2e6d5e0931facbeb42
2022-02-28 11:53:40 -08:00
Winnie Lin
471b126818 add min_triangle_area argument to IsInsideTriangle
Summary:
1. changed IsInsideTriangle in geometry_utils to take in min_triangle_area parameter instead of hardcoded value
2. updated point_mesh_cpu.cpp and point_mesh_cuda.[h/cu] to adapt to changes in geometry_utils function signatures
3. updated point_mesh_distance.py and test_point_mesh_distance.py to modify _C. calls

Reviewed By: bottler

Differential Revision: D34459764

fbshipit-source-id: 0549e78713c6d68f03d85fb597a13dd88e09b686
2022-02-25 12:43:04 -08:00
Jeremy Reizenstein
4d043fc9ac PyTorch 1.7 compatibility
Summary: Small changes discovered based on circleCI failures.

Reviewed By: patricklabatut

Differential Revision: D34426807

fbshipit-source-id: 819860f34b2f367dd24057ca7490284204180a13
2022-02-25 07:53:34 -08:00
Jeremy Reizenstein
f816568735 rename types to avoid clash
Summary: There are cases where importing pytorch3d seems to fail (internally at Meta) because of a clash between the builtin types module and ours, so rename ours.

Reviewed By: patricklabatut

Differential Revision: D34426817

fbshipit-source-id: f175448db6a4967a9a3f7bb6f595aad2ffb36455
2022-02-25 07:53:34 -08:00
Jeremy Reizenstein
0e88b21de6 Use newer circleci image
Summary:
Run the circleci tests with a non-deprecated circleci image. Small fix for PyTorch 1.7.
We no longer need to manually install nvidia-docker or the CUDA driver.

Reviewed By: patricklabatut

Differential Revision: D34426816

fbshipit-source-id: d6c67bfb0ff86dff8d8f7fe7b8801657c2e80030
2022-02-25 07:53:34 -08:00
Theo-Cheynel
1cbf80dab6 Added matrix_to_axis_angle to the exports of transforms (#1085)
Summary:
# Changelist
- `matrix_to_axis_angle` was declared in `pytorch3d/transforms/rotation_conversions.py` but never exported from the `__init__` file.
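As a rough illustration of what the newly exported conversion computes, here is a pure-Python sketch for the non-degenerate case (hedged: the real `matrix_to_axis_angle` is batched and handles edge cases; the helper name below is illustrative):

```python
import math

def matrix_to_axis_angle_sketch(R):
    """Axis-angle vector from a 3x3 rotation matrix (non-degenerate angles only)."""
    # Rotation angle from the trace; the clamp guards against round-off outside [-1, 1].
    cos_angle = (R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0
    angle = math.acos(max(-1.0, min(1.0, cos_angle)))
    # Rotation axis from the skew-symmetric part of R (breaks down as angle -> 0 or pi).
    s = 2.0 * math.sin(angle)
    axis = ((R[2][1] - R[1][2]) / s, (R[0][2] - R[2][0]) / s, (R[1][0] - R[0][1]) / s)
    return tuple(a * angle for a in axis)

# A 90-degree rotation about z maps to the axis-angle vector (0, 0, pi/2).
Rz = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
```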

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1085

Reviewed By: patricklabatut

Differential Revision: D34379935

Pulled By: bottler

fbshipit-source-id: 993c12a176630f91d0f107f298f458b2b35032e5
2022-02-21 11:27:13 -08:00
Georgia Gkioxari
ee71c7c447 small numerical fix to point_mesh
Summary: Small fix by adjusting the area `eps` to account for really small faces when computing point to face distances
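The pattern behind such fixes can be sketched in plain Python (hedged: the real code is CUDA/C++ and this helper name is illustrative): clamp the triangle area from below so near-degenerate faces cannot blow up the subsequent division.

```python
import math

def safe_triangle_area(v0, v1, v2, eps=1e-8):
    """Area of a 3D triangle, clamped below by eps."""
    e1 = [b - a for a, b in zip(v0, v1)]
    e2 = [b - a for a, b in zip(v0, v2)]
    # Cross product of the two edge vectors; its norm is twice the area.
    cx = e1[1] * e2[2] - e1[2] * e2[1]
    cy = e1[2] * e2[0] - e1[0] * e2[2]
    cz = e1[0] * e2[1] - e1[1] * e2[0]
    return max(eps, 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz))
```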

Reviewed By: bottler

Differential Revision: D34331336

fbshipit-source-id: 51c4888ea46fefa4e31d5b0bb494a9f9d77813cd
2022-02-21 09:26:38 -08:00
Georgia Gkioxari
3de41223dd lower eps
Summary: Lower the epsilon value in the IoU3D calculation to fix small numerical issue from GH#1082

Reviewed By: bottler

Differential Revision: D34371597

fbshipit-source-id: 12443fa359b7755ef4ae60e9adf83734a1a295ae
2022-02-21 09:26:38 -08:00
Jeremy Reizenstein
967a099231 Use dataclasses inside ply_io.
Summary: Refactor ply_io to make it easier to add new features. Mostly taken from the starting code I attached to https://github.com/facebookresearch/pytorch3d/issues/904.

Reviewed By: patricklabatut

Differential Revision: D34375978

fbshipit-source-id: ec017d31f07c6f71ba6d97a0623bb10be1e81212
2022-02-21 07:24:21 -08:00
Jeremy Reizenstein
feb5d36394 points2vols test fix
Summary: Fix tests which depended on output tensors being identical to input ones, which now fail in the main PyTorch branch because of a change in autograd. The functions still work in-place.

Reviewed By: patricklabatut

Differential Revision: D34375817

fbshipit-source-id: 295ae195f75eab6c7abab412c997470d8de8add1
2022-02-21 07:24:21 -08:00
Jeremy Reizenstein
db1f7c4506 avoid symeig
Summary: Use the newer eigh to avoid deprecation warnings in newer pytorch.

Reviewed By: patricklabatut

Differential Revision: D34375784

fbshipit-source-id: 40efe0d33fdfa071fba80fc97ed008cbfd2ef249
2022-02-21 07:24:21 -08:00
Alex Greene
59972b121d flexible background color for point compositing
Summary:
Modified the compositor background color tests to account for either a 3rd or 4th channel. Also replaced hard coding of channel value with C.

Implemented changes to alpha channel appending logic, and cleaned up extraneous warnings and checks, per task instructions.

Fixes https://github.com/facebookresearch/pytorch3d/issues/1048

Reviewed By: bottler

Differential Revision: D34305312

fbshipit-source-id: 2176c3bdd897d1a2ba6ff4c6fa801fea889e4f02
2022-02-18 07:01:22 -08:00
Jeremy Reizenstein
c8f3d6bc0b Fix Transform3d.stack of compositions
Summary:
Add a test for Transform3d.stack, and make it work with composed transformations.

Fixes https://github.com/facebookresearch/pytorch3d/issues/1072 .

Reviewed By: patricklabatut

Differential Revision: D34211920

fbshipit-source-id: bfbd0895494ca2ad3d08a61bc82ba23637e168cc
2022-02-15 06:52:41 -08:00
Jeremy Reizenstein
2a1de3b610 move LinearWithRepeat to pytorch3d
Summary: Move this simple layer from the NeRF project into pytorch3d.

Reviewed By: shapovalov

Differential Revision: D34126972

fbshipit-source-id: a9c6d6c3c1b662c1b844ea5d1b982007d4df83e6
2022-02-14 04:52:30 -08:00
Sergei Ovchinnikov
ef21a6f6aa Importing obj files without usemtl
Summary:
When there is no "usemtl" statement in the .obj file use material from .mtl if there is one.
https://github.com/facebookresearch/pytorch3d/issues/1068

Reviewed By: bottler

Differential Revision: D34141152

fbshipit-source-id: 7a5b5cc3f0bb287dc617f68de2cd085db8f7ad94
2022-02-10 09:39:44 -08:00
David Novotny
12f20d799e Convert from Pytorch3D NDC coordinates to grid_sample coordinates.
Summary: Implements a utility function to convert from 2D coordinates in Pytorch3D NDC space to the coordinates in grid_sample.
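The conversion amounts to a sign flip plus an aspect-ratio rescale. A minimal pure-Python sketch for a single point, under the assumed conventions (PyTorch3D NDC: +X left, +Y up, short image side spanning [-1, 1]; grid_sample: +x right, +y down, both axes in [-1, 1]); the real utility operates on batched tensors and its name may differ:

```python
def ndc_to_grid_sample_xy(x_ndc, y_ndc, image_size):
    """Map a single PyTorch3D NDC point to grid_sample's coordinate convention."""
    H, W = image_size
    aspect = max(H, W) / min(H, W)
    # Both axes flip sign between the two conventions.
    x, y = -x_ndc, -y_ndc
    # The long image side spans [-aspect, aspect] in NDC but [-1, 1] for grid_sample.
    if H >= W:
        y /= aspect
    else:
        x /= aspect
    return x, y
```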

Reviewed By: shapovalov

Differential Revision: D33741394

fbshipit-source-id: 88981653356588fe646e6dea48fe7f7298738437
2022-02-09 12:49:55 -08:00
Jeremy Reizenstein
47c0997227 Followup D33970393 (auto typing)
Summary: D33970393 (e9fb6c27e3) ran an inference to add some typing. Remove some where it was a bit too confident. (Also fix some pyre errors in plotly_vis caused by new mismatch.)

Reviewed By: patricklabatut

Differential Revision: D34004689

fbshipit-source-id: 430182b0ff0b91be542a3120da6d6b1d2b247c59
2022-02-09 12:42:56 -08:00
Pyre Bot Jr
e9fb6c27e3 Add annotations to vision/fair/pytorch3d
Reviewed By: shannonzhu

Differential Revision: D33970393

fbshipit-source-id: 9b4dfaccfc3793fd37705a923d689cb14c9d26ba
2022-02-03 01:46:32 -08:00
Jeremy Reizenstein
c2862ff427 use workaround for points_normals
Summary:
Use existing workaround for batched 3x3 symeig because it is faster than torch.symeig.

Added benchmark showing speedup. True = workaround.
```
Benchmark                Avg Time(μs)      Peak Time(μs) Iterations
--------------------------------------------------------------------------------
normals_True_3000            16237           17233             31
normals_True_6000            33028           33391             16
normals_False_3000        18623069        18623069              1
normals_False_6000        36535475        36535475              1
```

Should help https://github.com/facebookresearch/pytorch3d/issues/988

Reviewed By: nikhilaravi

Differential Revision: D33660585

fbshipit-source-id: d1162b277f5d61ed67e367057a61f25e03888dce
2022-01-24 11:41:55 -08:00
Jeremy Reizenstein
5053142363 typing for unproject_points
Summary: Fix the base class annotation for unproject_points.

Reviewed By: patricklabatut

Differential Revision: D33281586

fbshipit-source-id: 1c34e8c4b30b359fcb9307507bc778ad3fecf290
2022-01-24 10:52:24 -08:00
Jeremy Reizenstein
67778caee8 avoid deprecated raysamplers
Summary: Migrate away from NDCGridRaysampler and GridRaysampler to their more flexible replacements.

Reviewed By: patricklabatut

Differential Revision: D33281584

fbshipit-source-id: 65f8702e700a32d38f7cd6bda3924bb1707a0633
2022-01-24 10:52:23 -08:00
Jeremy Reizenstein
3eb4233844 New raysamplers
Summary: New MultinomialRaysampler succeeds GridRaysampler bringing masking and subsampling. Correspondingly, NDCMultinomialRaysampler succeeds NDCGridRaysampler.

Reviewed By: nikhilaravi, shapovalov

Differential Revision: D33256897

fbshipit-source-id: cd80ec6f35b110d1d20a75c62f4e889ba8fa5d45
2022-01-24 10:52:23 -08:00
Jeremy Reizenstein
174738c33e safer pip install in doc
Summary: Add --no-cache and --no-index to all commands which try to download wheels from S3, to avoid hitting pypi.

Reviewed By: nikhilaravi

Differential Revision: D33507975

fbshipit-source-id: ee796e43cc1864e475cd73c248e9900487012f25
2022-01-21 06:28:32 -08:00
Jeremy Reizenstein
45d096e219 cameras_from_opencv_projection device #1021
Summary: Fix https://github.com/facebookresearch/pytorch3d/issues/1021: cameras_from_opencv_projection always created cameras on the CPU.

Reviewed By: nikhilaravi

Differential Revision: D33508211

fbshipit-source-id: fadebd45cacafd633af6a58094cf6f654529992c
2022-01-21 05:32:20 -08:00
Jeremy Reizenstein
39bb2ce063 Join cameras as batch
Summary:
Function to join a list of cameras objects into a single batched object.

FB: In the next diff I will remove the `concatenate_cameras` function in implicitron and update the callsites.

Reviewed By: nikhilaravi

Differential Revision: D33198209

fbshipit-source-id: 0c9f5f5df498a0def9dba756c984e6a946618158
2022-01-21 05:29:43 -08:00
Jeremy Reizenstein
9e2bc3a17f ambient lights batching #1043
Summary:
convert_to_tensors_and_broadcast had a special case for a single input, which is not used anywhere and fails to do the right thing if a TensorProperties has only one kwarg. At the moment AmbientLights may be the only way to hit the problem. Fix by removing the special case.
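With the special case gone, the intended behavior is uniform length-1-or-N broadcasting, so a single kwarg takes the same path as several. A hedged pure-Python sketch on lists (the real function works on tensors and also moves them to a common device):

```python
def broadcast_batch(values):
    """Broadcast per-property batches (each of length 1 or N) to a common length N."""
    n = max(len(v) for v in values)
    out = []
    for v in values:
        if len(v) == n:
            out.append(list(v))
        elif len(v) == 1:
            # Length-1 inputs are repeated to the common batch size.
            out.append(list(v) * n)
        else:
            raise ValueError("each batch must have length 1 or %d" % n)
    return out
```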

Fixes https://github.com/facebookresearch/pytorch3d/issues/1043

Reviewed By: nikhilaravi

Differential Revision: D33638345

fbshipit-source-id: 7a6695f44242e650504320f73b6da74254d49ac7
2022-01-20 09:44:38 -08:00
Jeremy Reizenstein
fddd6a700f drop builds for PyTorch 1.6.0
Summary: PyTorch 1.7.0 came out in Oct 2020 and 1.7.1 in Dec 2020. We shouldn't need anything older than those, maybe not even 1.7.0.

Reviewed By: patricklabatut

Differential Revision: D33507967

fbshipit-source-id: d3de09c20c44870cbe5522705f2293acc0e62af3
2022-01-10 10:04:02 -08:00
Jeremy Reizenstein
85cdcc252d builds for PyTorch 1.10.1
Summary: Adds 1.10.1 to the nightly builds

Reviewed By: patricklabatut

Differential Revision: D33507966

fbshipit-source-id: af88b155adbc4e3236107f709323bd46a1819013
2022-01-10 10:04:02 -08:00
Jeremy Reizenstein
fc4dd80208 initialize pointcloud from list containing Nones
Summary:
The following snippet should work in more cases.
```
point_cloud = Pointclouds(
    [pcl.points_packed() for pcl in point_clouds],
    features=[pcl.features_packed() for pcl in point_clouds],
)
```

We therefore allow features and normals inputs to be lists which contain some (but not all) Nones.

The initialization of a Pointclouds from empty data is also made a bit better now at working out how many feature channels there are.

Reviewed By: davnov134

Differential Revision: D31795089

fbshipit-source-id: 54bf941ba80672d699ffd5ac28927740e830f8ab
2022-01-07 05:54:44 -08:00
Jeremy Reizenstein
9640560541 test listing
Summary: Quick script to list tests to help completion of test command.

Reviewed By: patricklabatut

Differential Revision: D33279584

fbshipit-source-id: acb463106d311498449a14c1daf52434878722bf
2022-01-06 02:56:32 -08:00
Jeremy Reizenstein
6726500ad3 simple warning for bin overflow
Summary: Since coarse rasterization on cuda can overflow bins, we detect when this happens for memory safety. See https://github.com/facebookresearch/pytorch3d/issues/348 . Also try to print a warning.

Reviewed By: patricklabatut

Differential Revision: D33065604

fbshipit-source-id: 99b3c576d01b78e6d77776cf1a3e95984506c93a
2022-01-06 02:30:49 -08:00
Jeremy Reizenstein
d6a12afbe7 Pointclouds.subsample on Windows
Summary: Fix https://github.com/facebookresearch/pytorch3d/issues/1015. Stop relying on the fact that the dtype returned by np.random.choice (int64 on Linux, int32 on Windows) matches the dtype used by pytorch for indexing (int64 everywhere).
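The gist of the fix, sketched with a hypothetical helper (not the library code): make the index dtype explicit rather than relying on the platform default.

```python
import numpy as np

def choose_subsample_indices(num_points, max_points, rng=None):
    """Pick max_points distinct indices with an explicit int64 dtype."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(num_points, size=max_points, replace=False)
    # The returned dtype is the platform default int (int32 on Windows,
    # int64 on Linux); torch indexing expects int64 everywhere.
    return idx.astype(np.int64)
```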

Reviewed By: patricklabatut

Differential Revision: D33428680

fbshipit-source-id: 716c857502cd54c563cb256f0eaca7dccd535c10
2022-01-06 02:22:44 -08:00
Jeremy Reizenstein
49f93b6388 remove Python3.6 builds
Summary: Python 3.6 was EOL on 2021-12-23.

Reviewed By: patricklabatut

Differential Revision: D33428708

fbshipit-source-id: 37a73898df49a4a49266839278fc8be56597405d
2022-01-05 07:25:02 -08:00
Jeremy Reizenstein
741777b5b5 More company name & License
Summary: Manual adjustments for license changes.

Reviewed By: patricklabatut

Differential Revision: D33405657

fbshipit-source-id: 8a21735726f3aece9f9164da9e3b272b27db8032
2022-01-04 11:43:38 -08:00
Jeremy Reizenstein
9eeb456e82 Update license for company name
Summary: Update all FB license strings to the new format.

Reviewed By: patricklabatut

Differential Revision: D33403538

fbshipit-source-id: 97a4596c5c888f3c54f44456dc07e718a387a02c
2022-01-04 11:43:38 -08:00
Pyre Bot Jr
7660ed1876 suppress errors in fbcode/vision - batch 2
Differential Revision: D33338085

fbshipit-source-id: fdb207864718c56dfa0d20530b59349c93af11bd
2021-12-28 11:28:48 -08:00
Nikhila Ravi
52c71b8816 Update Harmonic embedding in NeRF
Summary: Removed harmonic embedding function from projects/nerf and changed import to be from core pytorch3d.

Reviewed By: patricklabatut

Differential Revision: D33142358

fbshipit-source-id: 3004247d50392dbd04ea72e9cd4bace0dc03606b
2021-12-21 15:05:33 -08:00
Nikhila Ravi
f9a26a22fc Move Harmonic embedding to core pytorch3d
Summary:
Moved `HarmonicEmbedding` function in core PyTorch3D.
In the next diff will update the NeRF project.

Reviewed By: bottler

Differential Revision: D32833808

fbshipit-source-id: 0a12ccd1627c0ce024463c796544c91eb8d4d122
2021-12-21 15:05:33 -08:00
Nikhila Ravi
d67662d13c Update use of select_cameras
Summary: Removed `select_cameras.py` from implicitron and updated all callsites to directly index the cameras.

Reviewed By: bottler

Differential Revision: D33187605

fbshipit-source-id: aaf5b36aef9d72db0c7e89dec519f23646f6aa05
2021-12-21 05:46:38 -08:00
Nikhila Ravi
28ccdb7328 Enable __getitem__ for Cameras to return an instance of Cameras
Summary:
Added a custom `__getitem__` method to `CamerasBase` which returns an instance of the appropriate camera instead of the `TensorAccessor` class.

Long term we should deprecate the `TensorAccessor` and the `__getitem__` method on `TensorProperties`

FB: In the next diff I will update the uses of `select_cameras` in implicitron.

Reviewed By: bottler

Differential Revision: D33185885

fbshipit-source-id: c31995d0eb126981e91ba61a6151d5404b263f67
2021-12-21 05:46:38 -08:00
Jeremy Reizenstein
cc3259ba93 update linux wheel builds
Summary:
* Add PyTorch 1.10 + CUDA 11.1 combination.
* Change the CUDA 11.3 builds to happen in a separate docker image.
* Update connection to AWS to use the official `aws` commands instead of the wrapper which is now gone.

Reviewed By: patricklabatut

Differential Revision: D33235489

fbshipit-source-id: 56401f27c002a512ae121b3ec5911d020bfab885
2021-12-21 05:07:41 -08:00
Jeremy Reizenstein
b51be58f63 validate sampling_mode
Summary: The new sampling mode option in TexturesUV must match when collating meshes.

Reviewed By: patricklabatut

Differential Revision: D33235901

fbshipit-source-id: f457473d90bf022e65fe122ef45bf5efad134345
2021-12-21 04:48:14 -08:00
Jeremy Reizenstein
7449951850 force old mistune for website
Summary: When parsing tutorials for building the website, we fix a few libraries at old versions. They need mistune 0.8.4, not the new version 2+, so this gets added to the list of fixed-version libraries.

Reviewed By: patricklabatut

Differential Revision: D33236031

fbshipit-source-id: 2b152b64043edffc59fa909012eab5794c7e8844
2021-12-21 04:44:36 -08:00
Nikhila Ravi
262c1bfcd4 Join points as batch
Summary: Function to join a list of pointclouds as a batch similar to the corresponding function for Meshes.

Reviewed By: bottler

Differential Revision: D33145906

fbshipit-source-id: 160639ebb5065e4fae1a1aa43117172719f3871b
2021-12-21 04:44:36 -08:00
Jeremy Reizenstein
eb2bbf8433 screen space docstrings fix
Summary: Fix some comments to match the recent change to transform_points_screen.

Reviewed By: patricklabatut

Differential Revision: D33243697

fbshipit-source-id: dc8d182667a9413bca2c2e3657f97b2f7a47c795
2021-12-21 04:31:33 -08:00
Jeremy Reizenstein
1152a93b72 PyTorch>1.9 version str
Summary: Make code for downloading linux wheels robust to double-digit PyTorch minor version.

Reviewed By: nikhilaravi

Differential Revision: D33170562

fbshipit-source-id: 559a97cc98ff8411e235a9f9e29f84b7a400c716
2021-12-18 15:49:17 -08:00
Pyre Bot Jr
315f2487db suppress errors in vision/fair/pytorch3d
Differential Revision: D33202801

fbshipit-source-id: d4cb0f4f4a8ad5a6519ce4b8c640e8f96fbeaccb
2021-12-17 19:23:58 -08:00
Georgia Gkioxari
ccfb72cc50 small fix for iou3d
Summary:
A small numerical fix for IoU for 3D boxes, fixes GH #992

* Adds a check for boxes with zero side areas (invalid boxes)
* Fixes numerical issue when two boxes have coplanar sides

Reviewed By: nikhilaravi

Differential Revision: D33195691

fbshipit-source-id: 8a34b4d1f1e5ec2edb6d54143930da44bdde0906
2021-12-17 16:13:51 -08:00
Jeremy Reizenstein
069c9fd759 pytorch TORCH_CHECK_ARG version compatibility
Summary: Restore compatibility with old C++ after recent torch change. https://github.com/facebookresearch/pytorch3d/issues/995

Reviewed By: patricklabatut

Differential Revision: D33093174

fbshipit-source-id: 841202fb875d601db265e93dcf9cfa4249d02b25
2021-12-15 08:34:10 -08:00
CodemodService FBSourceClangFormatLinterBot
9eec430f1c Daily arc lint --take CLANGFORMAT
Reviewed By: zertosh

Differential Revision: D33090919

fbshipit-source-id: 78efa486776014a27f280a01a21f9e0af6742e3e
2021-12-14 08:06:14 -08:00
Peter Bell
f8fe9a2be1 Remove THGeneral (#69041)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69041

`TH_CONCAT_{N}` is still being used by THP so I've moved that into
its own header, but all the compiled code is gone.

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D32872477

Pulled By: ngimel

fbshipit-source-id: 06c82d8f96dbcee0715be407c61dfc7d7e8be47a
2021-12-13 16:14:17 -08:00
Jeremy Reizenstein
d049cd2e01 PyTorch 1.10 + CUDA 11.1 builds
Summary: Although the PyTorch website, which describes the current version 1.10, suggests CUDA 10.2 and 11.3 are supported, it would appear that we need to include builds for CUDA 11.1 to avoid surprises. This is because these builds are on anaconda, and this combination is used on Google Colab.

Reviewed By: nikhilaravi

Differential Revision: D33063932

fbshipit-source-id: 1b22d1f06e22bd18fb53ceecb58e78ac6a5d1693
2021-12-13 10:11:00 -08:00
Jeremy Reizenstein
1edc624d82 version 0.6.1
Summary: Update version number

Reviewed By: patricklabatut

Differential Revision: D33016833

fbshipit-source-id: ee3b0997887ab3bc5779503b13fa2014df41eaed
2021-12-13 04:38:16 -08:00
Jeremy Reizenstein
6ea6314792 [pytorch3d] install.md for 0.6.1
Summary: Update references to dependencies

Reviewed By: patricklabatut

Differential Revision: D33016832

fbshipit-source-id: aa41c7ccc6acd19654303bc18bfd734dc29d88a3
2021-12-13 04:38:16 -08:00
Jeremy Reizenstein
093999e71f update tutorials for release
Summary: Pre 0.6.1 release, make the tutorials expect wheels with PyTorch 1.10.0

Reviewed By: patricklabatut

Differential Revision: D33016834

fbshipit-source-id: b8c5c1c6158f806c3e55ec668117fa762fa4b75f
2021-12-13 04:38:16 -08:00
Jeremy Reizenstein
a22b1e32a4 linux builds for PyTorch 1.10.0
Summary: Build the wheels with latest PyTorch

Reviewed By: patricklabatut

Differential Revision: D33016835

fbshipit-source-id: 0ec42f31f1e4d4055562f18790f929b34bb13c52
2021-12-13 04:38:16 -08:00
CodemodService FBSourceClangFormatLinterBot
9c9d9440f9 Daily arc lint --take CLANGFORMAT
Reviewed By: zertosh

Differential Revision: D32975574

fbshipit-source-id: 66856595c7bc29921f24a2c5c00c72892f262aa1
2021-12-09 00:10:21 -08:00
Peter Bell
c65af9ef5a Remove remaining THC code (#69039)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69039

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D32872476

Pulled By: ngimel

fbshipit-source-id: 7972aacc24aef9450fb59b707ed6396c501bcb31
2021-12-08 12:15:58 -08:00
Jeremy Reizenstein
70acb3e415 new tests demonstrating pixel matching
Summary: Demonstrate current behavior of pixels with new tests of all renderers.

Reviewed By: gkioxari

Differential Revision: D32651141

fbshipit-source-id: 3ca30b4274ed2699bc5e1a9c6437eb3f0b738cbf
2021-12-07 15:04:20 -08:00
Jeremy Reizenstein
bf3bc6f8e3 screen cameras lose -1
Summary:
All the renderers in PyTorch3D (pointclouds including pulsar, meshes, raysampling) use the align_corners=False style. NDC space goes between the edges of the outer pixels. For a non-square image with W>H, the vertical NDC space goes from -1 to 1 and the horizontal from -W/H to W/H.

However it was recently pointed out that functionality which deals with screen space inside the camera classes is inconsistent with this. It unintentionally uses align_corners=True. This fixes that.
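Under this convention, the half-extent of NDC space along each axis follows directly from the image shape. A tiny sketch (illustrative helper name):

```python
def ndc_half_extent(image_size):
    """Half-extent of NDC space along (x, y): the short image side spans [-1, 1]."""
    H, W = image_size
    short = min(H, W)
    return W / short, H / short
```

For (H, W) = (100, 200) this gives x in [-2, 2] and y in [-1, 1], matching the W>H case described above.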

This would change behaviour of the following:
- If you create a camera in screen coordinates, i.e. setting in_ndc=False, then anything you do with the camera which touches NDC space may be affected, including trying to use renderers. The transform_points_screen function will not be affected.
- If you call the function “transform_points_screen” on a camera defined in NDC space, results will be different. I have illustrated in the diff how to get the old results from the new results, but this probably isn’t the right long-term solution.

Reviewed By: gkioxari

Differential Revision: D32536305

fbshipit-source-id: 377325a9137282971dcb7ca11a6cba3fc700c9ce
2021-12-07 15:04:20 -08:00
Jeremy Reizenstein
cff4876131 add from_ndc to unproject_points
Summary: Give unproject_points an argument letting it bypass screen space. Use it to let the raysampler work for cameras defined in screen space.

Reviewed By: gkioxari

Differential Revision: D32596600

fbshipit-source-id: 2fe585dcd138cdbc65dd1c70e1957fd894512d3d
2021-12-07 15:04:20 -08:00
Jeremy Reizenstein
a0e2d2e3c3 move benchmarks to separate directory
Summary: Move benchmarks to a separate directory as tests/ is getting big.

Reviewed By: nikhilaravi

Differential Revision: D32885462

fbshipit-source-id: a832662a494ee341ab77d95493c95b0af0a83f43
2021-12-07 10:26:50 -08:00
Roman Shapovalov
a6508ac3df Fix: Pointclouds.inside_box reducing over spatial dimensions.
Summary: As subj. Tests corrected accordingly. Also changed the test to provide a bit better diagnostics.

Reviewed By: bottler

Differential Revision: D32879498

fbshipit-source-id: 0a852e4a13dcb4ca3e54d71c6b263c5d2eeaf4eb
2021-12-06 07:45:46 -08:00
Ana Dodik
d9f709599b Adding the option to choose the texture sampling mode in TexturesUV.
Summary:
This diff adds the `sample_mode` parameter to `TexturesUV` to control the interpolation mode during texture sampling. It simply gets forwarded to `torch.nn.functional.grid_sample`.

This option was requested in this [GitHub issue](https://github.com/facebookresearch/pytorch3d/issues/805).
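What the mode changes is easiest to see in one dimension: nearest snaps to a single texel while bilinear blends neighbors. A hedged 1-D analogue of grid_sample's behavior (align_corners=True style for brevity; in the library the option is simply passed through):

```python
def sample_texture_1d(tex, u, mode="bilinear"):
    """Sample a 1-D texture at u in [0, 1]; a toy stand-in for grid_sample modes."""
    pos = u * (len(tex) - 1)
    if mode == "nearest":
        # Round to the closest texel index.
        return tex[min(len(tex) - 1, int(pos + 0.5))]
    # Bilinear (here: linear) blend of the two neighboring texels.
    lo = min(int(pos), len(tex) - 2)
    frac = pos - lo
    return tex[lo] * (1.0 - frac) + tex[lo + 1] * frac
```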

Reviewed By: patricklabatut

Differential Revision: D32665185

fbshipit-source-id: ac0bc66a018bd4cb20d75fec2d7c11145dd20199
2021-11-29 07:01:28 -08:00
Patrick Labatut
e4456dba2f Facebook -> Meta Platforms on website footer + docs
Summary: Update company copyright on website footer + documentation pages, see [guidelines](https://www.internalfb.com/intern/wiki/Open_Source/Launch_an_OSS_Project/Launch_Preparation/Automated_Checkup/Terms_Of_Use_&_Privacy_Policy/).

Reviewed By: bottler

Differential Revision: D32649563

fbshipit-source-id: f285be79c185496832c5d41b839ee974234a8fa5
2021-11-24 10:07:15 -08:00
Jeremy Reizenstein
7fa333f632 Fix some Transform3D -> Transform3d
Summary: Fix some typos in comments.

Reviewed By: patricklabatut

Differential Revision: D32596645

fbshipit-source-id: 09b6d8c49f4f0301b80df626c6f9a2d5b5d9b1a7
2021-11-23 11:31:11 -08:00
Jeremy Reizenstein
a0247ea6bd pulsar image_size validation
Summary:
For a non-square image, the image_size in PointsRasterizationSettings is now (H,W) not (W,H). A part of pulsar's validation code wasn't updated for this.

The following now works.
```
H, W = 249, 125
image_size = (H, W)
camera = PerspectiveCameras(focal_length=1.0, image_size=(image_size,), in_ndc=True)
points_rasterizer = PointsRasterizer(cameras=camera, raster_settings=PointsRasterizationSettings(image_size=image_size, radius=0.0000001))
pulsar_renderer = PulsarPointsRenderer(rasterizer=points_rasterizer)
pulsar_renderer(Pointclouds(...), gamma = (0.1,), znear = (0.1,), zfar = (70,))
```

Reviewed By: nikhilaravi, classner

Differential Revision: D32316322

fbshipit-source-id: 8405a49acecb1c95d37ee368c3055868b797208a
2021-11-17 15:18:01 -08:00
Pyre Bot Jr
a8cb7fa862 suppress errors in fbcode/vision - batch 2
Differential Revision: D32509948

fbshipit-source-id: 762ad27c7e6c76c30eb97fd44f1739295f63b98b
2021-11-17 14:38:11 -08:00
Jeremy Reizenstein
7ce18f38cd TexturesAtlas in plotly
Summary:
Lets a K=1 textures atlas be viewed in plotly. Fixes https://github.com/facebookresearch/pytorch3d/issues/916 .

Test: Now get colored faces in
```
import torch
from pytorch3d.utils import ico_sphere
from pytorch3d.vis.plotly_vis import plot_batch_individually
from pytorch3d.renderer import TexturesAtlas

b = ico_sphere()
face_colors = torch.rand(b.faces_padded().shape)
tex = TexturesAtlas(face_colors[:,:,None,None])
b.textures=tex
plot_batch_individually(b)
```

Reviewed By: gkioxari

Differential Revision: D32190470

fbshipit-source-id: 258d30b7e9d79751a79db44684b5540657a2eff5
2021-11-11 02:15:33 -08:00
Jeremy Reizenstein
5fbdb99aec builds for pytorch 1.10.0
Summary:
Add builds corresponding to the new pytorch 1.10.0. We omit CUDA 11.3 testing because it fails with current hardware, and omit the main build too for the moment.

Also move to the newer GPU circle CI executors.

Reviewed By: gkioxari

Differential Revision: D32335934

fbshipit-source-id: 416d92a8eecd06ef7fc742664a5f2d46f93415f8
2021-11-11 02:03:37 -08:00
Pyre Bot Jr
1836c786fe suppress errors in vision/fair/pytorch3d
Differential Revision: D32310496

fbshipit-source-id: fa1809bbe0b999baee6d07fad3890dc8c2a2157b
2021-11-10 02:52:00 -08:00
Ignacio Rocco
cac6cb1b78 Update NDC raysampler for non-square convention (#29)
Summary:
- Old NDC convention had xy coords in [-1,1]x[-1,1]
- New NDC convention has xy coords in [-1, 1]x[-u, u] or [-u, u]x[-1, 1]

where u > 1 is the aspect ratio of the image.

This PR fixes the NDC raysampler to use the new convention.

Partial fix for https://github.com/facebookresearch/pytorch3d/issues/868

Pull Request resolved: https://github.com/fairinternal/pytorch3d/pull/29

Reviewed By: davnov134

Differential Revision: D31926148

Pulled By: bottler

fbshipit-source-id: c6c42c60d1473b04e60ceb49c8c10951ddf03c74
2021-11-05 10:36:19 -07:00
Jeremy Reizenstein
bfeb82efa3 some pointcloud typing
Summary: Make clear that features_padded() etc. can return None.

Reviewed By: patricklabatut

Differential Revision: D31795088

fbshipit-source-id: 7b0bbb6f3b7ad7f7b6e6a727129537af1d1873af
2021-10-28 04:54:20 -07:00
Jeremy Reizenstein
73a14d7266 dataparallel fix
Summary: Attempt to overcome a flaky test.

Reviewed By: patricklabatut

Differential Revision: D31895560

fbshipit-source-id: 1ecbb1782b0eafe132f88425c48487c2d0e10d2d
2021-10-26 14:35:30 -07:00
una-dinosauria
bee31c48d3 Make some matrix conversion jittable (#898)
Summary:
Make sure the functions from `rotation_conversion` are jittable, and add some type hints.

Add tests to verify this is the case.

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/898

Reviewed By: patricklabatut

Differential Revision: D31926103

Pulled By: bottler

fbshipit-source-id: bff6013c5ca2d452e37e631bd902f0674d5ca091
2021-10-26 14:31:46 -07:00
RWL
29417d1f9b NaN (divide by zero) fix for issue #561 and #790 (#891)
Summary:
https://github.com/facebookresearch/pytorch3d/issues/561
https://github.com/facebookresearch/pytorch3d/issues/790
Divide-by-zero fix (NaN fix). When perspective_correct=True, BarycentricPerspectiveCorrectionForward and BarycentricPerspectiveCorrectionBackward in ../csrc/utils/geometry_utils.cuh are called. The denominator (denom) values should not be allowed to go to zero. I was able to resolve this issue locally with this PR and submit it for the team's review.
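The essence of the fix, sketched in plain Python (hedged: the real change is in CUDA code, and this helper is illustrative): clamp the perspective-correction denominator away from zero so degenerate inputs cannot yield NaN.

```python
def perspective_correct_barycentric(bary, zs, eps=1e-8):
    """Perspective-correct barycentric coordinates using per-vertex depths zs."""
    num = [b / z for b, z in zip(bary, zs)]
    denom = sum(num)
    # The fix: never let the denominator reach exactly zero.
    if abs(denom) < eps:
        denom = eps if denom >= 0.0 else -eps
    return [n / denom for n in num]
```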

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/891

Reviewed By: patricklabatut

Differential Revision: D31829695

Pulled By: bottler

fbshipit-source-id: a3517b8362f6e60d48c35731258d8ce261b1d912
2021-10-22 04:52:06 -07:00
Peter Bell
57b9c729b8 Remove THCGeneral.cpp (#66766)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66766

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D31721647

Pulled By: ngimel

fbshipit-source-id: 5033a2800871c8745a1a92e379c9f97c98af212e
2021-10-19 16:08:39 -07:00
Pyre Bot Jr
7c111f7379 suppress errors in vision/fair/pytorch3d
Differential Revision: D31737477

fbshipit-source-id: 2590548c1b7a65c277ccddd405276c244fde0961
2021-10-18 12:18:08 -07:00
Jeremy Reizenstein
3953de47ee remove torch from cuda
Summary: Keep using at:: instead of torch:: so we don't need torch/extension.h and can keep other compilers happy.

Reviewed By: patricklabatut

Differential Revision: D31688436

fbshipit-source-id: 1825503da0104acaf1558d17300c02ef663bf538
2021-10-18 03:38:11 -07:00
Jeremy Reizenstein
1a7442a483 windows compatibility
Summary: A few tweaks to make the CUDA build on Windows happier, as remarked in #876.

Reviewed By: patricklabatut

Differential Revision: D31688188

fbshipit-source-id: 20816d6215f2e3ec898f81ae4221b1c2ff24b64f
2021-10-18 03:38:11 -07:00
Ignacio Rocco
16ebf54e69 NDC doc fix (#28)
Summary:
- Added clarifications about NDC coordinate system for square and non-square images.

Pull Request resolved: https://github.com/fairinternal/pytorch3d/pull/28

Reviewed By: nikhilaravi

Differential Revision: D31681444

Pulled By: bottler

fbshipit-source-id: f71eabe9b3dd54b9372cef617e08f837f316555b
2021-10-17 07:41:57 -07:00
Jeremy Reizenstein
14dd2611ee Remove version number from docs title
Summary: Small docs fixes: spelling. Avoid things which get out of date quickly: year, version.

Reviewed By: patricklabatut

Differential Revision: D31659927

fbshipit-source-id: b0111140bdaf3c6cadc09f70621bf5656909ca02
2021-10-16 15:06:23 -07:00
Jeremy Reizenstein
34b1b4ab8b defaulted grid_sizes in points2vols
Summary: Fix #873, that grid_sizes defaults to the wrong dtype in points2volumes code, and mask doesn't have a proper default.

Reviewed By: nikhilaravi

Differential Revision: D31503545

fbshipit-source-id: fa32a1a6074fc7ac7bdb362edfb5e5839866a472
2021-10-16 14:41:59 -07:00
Nikhila Ravi
2f2466f472 Update eps for coplanar check in 3D IoU
Summary: Make eps=1e-4 the default for the coplanar check and also allow the user to set it in the call to `box3d_overlap`.

Reviewed By: gkioxari

Differential Revision: D31596836

fbshipit-source-id: b57fe603fd136cfa58fddf836922706d44fe894e
2021-10-13 13:29:47 -07:00
Jeremy Reizenstein
53d99671bd remove PyTorch 1.5 builds
Summary: PyTorch 1.6.0 came out on 28 Jul 2020. Stop builds for 1.5.0 and 1.5.1. Also update the news section of the README for recent releases.

Reviewed By: nikhilaravi

Differential Revision: D31442830

fbshipit-source-id: 20bdd8a07090776d0461240e71c6536d874615f6
2021-10-11 06:13:01 -07:00
Pyre Bot Jr
6d36c1e2b0 suppress errors in vision/fair/pytorch3d
Differential Revision: D31496551

fbshipit-source-id: 705fd88f319875db3f7938a2946c48a51ea225f5
2021-10-07 21:58:08 -07:00
Nikhila Ravi
6dfa326922 IOU box3d epsilon fix
Summary: The epsilon value is important for determining whether vertices are inside/outside a plane.

Reviewed By: gkioxari

Differential Revision: D31485247

fbshipit-source-id: 5517575de7c02f1afa277d00e0190a81f44f5761
2021-10-07 18:42:09 -07:00
Jeremy Reizenstein
b26f4bc33a test tolerance loosenings
Summary: Increase some test tolerances so that they pass in more situations, and re-enable two tests.

Reviewed By: nikhilaravi

Differential Revision: D31379717

fbshipit-source-id: 06a25470cc7b6d71cd639d9fd7df500d4b84c079
2021-10-07 10:48:12 -07:00
Ruilong Li
8fa438cbda Fix camera conversion between opencv and pytorch3d
Summary:
For a non-square image, the NDC space in PyTorch3D is not the square [-1, 1]. Instead, it is [-1, 1] along the smaller side and [-u, u] along the larger side, where u > 1. The PyTorch3D renderer follows this convention.

See the function `get_ndc_to_screen_transform` for an example.

Without this fix, rendering with a PyTorch3D camera converted from an OpenCV camera produces incorrect results on non-square images.

This fix also makes the `transform_points_screen` function deliver results consistent with OpenCV projection for the converted PyTorch3D camera.
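The non-square convention can be made concrete with a tiny helper (a hypothetical function mirroring the convention described above, not an actual PyTorch3D API):

```python
def ndc_range(image_height, image_width):
    """Half-extents (u_x, u_y) of PyTorch3D's NDC space for an (H, W) image.

    The shorter image side spans [-1, 1]; the longer side spans [-u, u]
    with u = long_side / short_side > 1.
    """
    short = min(image_height, image_width)
    return image_width / short, image_height / short

# Square image: the familiar [-1, 1] x [-1, 1] square.
assert ndc_range(512, 512) == (1.0, 1.0)
# Landscape 480x640 image: x spans [-4/3, 4/3], y spans [-1, 1].
u_x, u_y = ndc_range(480, 640)
```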

Reviewed By: classner

Differential Revision: D31366775

fbshipit-source-id: 8858ae7b5cf5c0a4af5a2af40a1358b2fe4cf74b
2021-10-07 10:15:31 -07:00
CodemodService Bot
815a93ce89 Daily arc lint --take BLACK
Reviewed By: zertosh

Differential Revision: D31464988

fbshipit-source-id: 2eaf28d6869ccb70fd4df4f7de15d959cdaba0be
2021-10-06 21:19:23 -07:00
Jeremy Reizenstein
23ef666db1 build website in docker container
Summary: Do the website building in a docker container to avoid worrying about dependencies.

Reviewed By: nikhilaravi

Differential Revision: D30223892

fbshipit-source-id: 77b7b4630188167316891381f6ca9e9fbe7f0a05
2021-10-06 18:09:45 -07:00
Nikita Smetanin
d7d740abe9 Symmetric eigen 3x3 implementation + benchmark & tests
Summary:
Symmetric eigenvalues 3x3 implementation from https://github.com/fairinternal/denseposeslim/blob/roman_c3dpo/tools/functions.py#L612

based on https://en.wikipedia.org/wiki/Eigenvalue_algorithm#3.C3.973_matrices and https://www.geometrictools.com/Documentation/RobustEigenSymmetric3x3.pdf

Benchmarks show significant outperformance of symeig3x3 in comparison with torch implementations (torch.symeig and torch.linalg.eigh) on GPU (P100), especially for large batches: 70-280ns per sample vs 3400ns per sample for torch_linalg_eigh_1048576_cpu

It's worth mentioning that torch.linalg.eigh is still comparably fast for batches up to 8192 on CPU.

Some tests are still failing as the error thresholds need to be adjusted appropriately.

Reviewed By: patricklabatut

Differential Revision: D29915453

fbshipit-source-id: 7c1b062da631c57c4e22a42dd0027ea5e205f1b5
2021-10-06 10:57:07 -07:00
Jeremy Reizenstein
9585a58d10 version number 0.6.0
Summary: update

Reviewed By: patricklabatut

Differential Revision: D31338002

fbshipit-source-id: 90ed6c2ea411c0384dd233ee88e51b5f608eef88
2021-10-05 16:25:25 -07:00
Jeremy Reizenstein
364a7dcaf4 Install.md for next release.
Summary: now supporting PyTorch 1.9.1

Reviewed By: patricklabatut

Differential Revision: D31338001

fbshipit-source-id: 11140819d10af388d31905a39f1da136cf9c5ff2
2021-10-05 16:25:25 -07:00
Georgia Gkioxari
1360d69ffb minor note fix
Summary: A small fix for the iou3d note

Reviewed By: bottler

Differential Revision: D31370686

fbshipit-source-id: 6c97302b5c78de52915f31be70f234179c4b246d
2021-10-03 17:17:47 -07:00
Jeremy Reizenstein
4281df19ce subsample pointclouds
Summary: New function to randomly subsample Pointclouds to a maximum size.

Reviewed By: nikhilaravi

Differential Revision: D30936533

fbshipit-source-id: 789eb5004b6a233034ec1c500f20f2d507a303ff
2021-10-02 13:40:16 -07:00
Jeremy Reizenstein
ee2b2feb98 Use C++/CUDA in points2vols
Summary:
Move the core of add_points_to_volumes to the new C++/CUDA implementation. Add a new flag to let the user disable this. Avoids copies. About a 30% speedup on the larger cases, and up to 50% on the smaller cases.

New timings
```
Benchmark                                                               Avg Time(μs)      Peak Time(μs) Iterations
--------------------------------------------------------------------------------
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[25, 25, 25]_1000                     4575           12591            110
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[25, 25, 25]_10000                   25468           29186             20
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[25, 25, 25]_100000                 202085          209897              3
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[101, 111, 121]_1000                 46059           48188             11
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[101, 111, 121]_10000                83759           95669              7
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[101, 111, 121]_100000              326056          339393              2
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[25, 25, 25]_1000                       2379            4738            211
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[25, 25, 25]_10000                     12100           63099             42
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[25, 25, 25]_100000                    63323           63737              8
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[101, 111, 121]_1000                   45216           45479             12
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[101, 111, 121]_10000                  57205           58524              9
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[101, 111, 121]_100000                139499          139926              4
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[25, 25, 25]_1000                   40129           40431             13
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[25, 25, 25]_10000                 204949          239293              3
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[25, 25, 25]_100000               1664541         1664541              1
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[101, 111, 121]_1000               391573          395108              2
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[101, 111, 121]_10000              674869          674869              1
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[101, 111, 121]_100000            2713632         2713632              1
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[25, 25, 25]_1000                     12726           13506             40
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[25, 25, 25]_10000                    73103           73299              7
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[25, 25, 25]_100000                  598634          598634              1
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[101, 111, 121]_1000                 398742          399256              2
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[101, 111, 121]_10000                543129          543129              1
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[101, 111, 121]_100000              1242956         1242956              1
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[25, 25, 25]_1000                  1814            8884            276
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[25, 25, 25]_10000                 1996            8851            251
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[25, 25, 25]_100000                4608           11529            109
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[101, 111, 121]_1000               5183           12508             97
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[101, 111, 121]_10000              7106           14077             71
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[101, 111, 121]_100000            25914           31818             20
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[25, 25, 25]_1000                    1778            8823            282
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[25, 25, 25]_10000                   1825            8613            274
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[25, 25, 25]_100000                  3154           10161            159
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[101, 111, 121]_1000                 4888            9404            103
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[101, 111, 121]_10000                5194            9963             97
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[101, 111, 121]_100000               8109           14933             62
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[25, 25, 25]_1000                 3320           10306            151
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[25, 25, 25]_10000                7003            8595             72
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[25, 25, 25]_100000              49140           52957             11
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[101, 111, 121]_1000             35890           36918             14
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[101, 111, 121]_10000            58890           59337              9
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[101, 111, 121]_100000          286878          287600              2
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[25, 25, 25]_1000                   2484            8805            202
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[25, 25, 25]_10000                  3967            9090            127
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[25, 25, 25]_100000                19423           19799             26
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[101, 111, 121]_1000               33228           33329             16
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[101, 111, 121]_10000              37292           37370             14
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[101, 111, 121]_100000             73550           74017              7
--------------------------------------------------------------------------------
```
Previous timings
```
Benchmark                                                               Avg Time(μs)      Peak Time(μs) Iterations
--------------------------------------------------------------------------------
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[25, 25, 25]_1000                    10100           46422             50
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[25, 25, 25]_10000                   28442           32100             18
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[25, 25, 25]_100000                 241127          254269              3
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[101, 111, 121]_1000                 54149           79480             10
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[101, 111, 121]_10000               125459          212734              4
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[101, 111, 121]_100000              512739          512739              1
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[25, 25, 25]_1000                       2866           13365            175
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[25, 25, 25]_10000                      7026           12604             72
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[25, 25, 25]_100000                    48822           55607             11
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[101, 111, 121]_1000                   38098           38576             14
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[101, 111, 121]_10000                  48006           54120             11
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[101, 111, 121]_100000                131563          138536              4
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[25, 25, 25]_1000                   64615           91735              8
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[25, 25, 25]_10000                 228815          246095              3
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[25, 25, 25]_100000               3086615         3086615              1
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[101, 111, 121]_1000               464298          465292              2
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[101, 111, 121]_10000             1053440         1053440              1
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[101, 111, 121]_100000            6736236         6736236              1
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[25, 25, 25]_1000                     11940           12440             42
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[25, 25, 25]_10000                    56641           58051              9
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[25, 25, 25]_100000                  711492          711492              1
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[101, 111, 121]_1000                 326437          329846              2
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[101, 111, 121]_10000                418514          427911              2
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[101, 111, 121]_100000              1524285         1524285              1
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[25, 25, 25]_1000                  5949           13602             85
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[25, 25, 25]_10000                 5817           13001             86
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[25, 25, 25]_100000               23833           25971             21
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[101, 111, 121]_1000               9029           16178             56
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[101, 111, 121]_10000             11595           18601             44
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[101, 111, 121]_100000            46986           47344             11
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[25, 25, 25]_1000                    2554            9747            196
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[25, 25, 25]_10000                   2676            9537            187
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[25, 25, 25]_100000                  6567           14179             77
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[101, 111, 121]_1000                 5840           12811             86
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[101, 111, 121]_10000                6102           13128             82
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[101, 111, 121]_100000              11945           11995             42
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[25, 25, 25]_1000                 7642           13671             66
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[25, 25, 25]_10000               25190           25260             20
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[25, 25, 25]_100000             212018          212134              3
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[101, 111, 121]_1000             40421           45692             13
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[101, 111, 121]_10000            92078           92132              6
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[101, 111, 121]_100000          457211          457229              2
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[25, 25, 25]_1000                   3574           10377            140
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[25, 25, 25]_10000                  7222           13023             70
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[25, 25, 25]_100000                48127           48165             11
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[101, 111, 121]_1000               34732           35295             15
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[101, 111, 121]_10000              43050           51064             12
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[101, 111, 121]_100000            106028          106058              5
--------------------------------------------------------------------------------
```

Reviewed By: nikhilaravi

Differential Revision: D29548609

fbshipit-source-id: 7026e832ea299145c3f6b55687f3c1601294f5c0
2021-10-01 11:58:24 -07:00
Jeremy Reizenstein
9ad98c87c3 Cuda function for points2vols
Summary: Added a CUDA implementation to match the new, still-unused C++ function for the core of points2vols.

Reviewed By: nikhilaravi

Differential Revision: D29548608

fbshipit-source-id: 16ebb61787fcb4c70461f9215a86ad5f97aecb4e
2021-10-01 11:58:24 -07:00
Jeremy Reizenstein
0dfc6e0eb8 CPU function for points2vols
Summary: Single C++ function for the core of points2vols, not used anywhere yet. Added ability to control align_corners and the weight of each point, which may be useful later.

Reviewed By: nikhilaravi

Differential Revision: D29548607

fbshipit-source-id: a5cda7ec2c14836624e7dfe744c4bbb3f3d3dfe2
2021-10-01 11:58:24 -07:00
Jeremy Reizenstein
c7c6deab86 compatibility statement in README
Summary: Statement about compatibility.

Reviewed By: nikhilaravi

Differential Revision: D30697072

fbshipit-source-id: aeb5e3e0a08c1797033d8c00b24484c8a699cb02
2021-09-30 10:50:11 -07:00
Jeremy Reizenstein
4ad8576541 rasterization header comment fixes
Summary: Fix some missing or misplaced argument descriptions.

Reviewed By: nikhilaravi

Differential Revision: D31305132

fbshipit-source-id: af4fcee9766682b2b7f7f16327e839090e377be2
2021-09-30 10:41:50 -07:00
Simon Moisselin
a5cbb624c1 Fix typo in chamfer loss docstring (#862)
Summary:
y_lengths is about `y`, not `x`.

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/862

Reviewed By: bottler

Differential Revision: D31304434

Pulled By: patricklabatut

fbshipit-source-id: 1db4cd57677fc018c229e02172f95ffa903d75eb
2021-09-30 05:10:18 -07:00
Theo-Cheynel
720bdf60f5 Removed typos 'f' from the f-string error messages (#851)
Summary:
Fixed a mistake in Python f-strings that caused an additional letter "f" to appear in the error messages.
The error messages would read something like:
```
raise ValueError(f"Invalid rotation matrix  shape f{matrix.shape}.")
ValueError: Invalid rotation matrix  shape ftorch.Size([4, 4]).
```
(with an additional f, probably a mistake)
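The bug is easy to reproduce — a minimal sketch of the typo and its fix:

```python
matrix_shape = (4, 4)  # stand-in for matrix.shape

# Buggy: the stray "f" inside the f-string is emitted literally.
buggy = f"Invalid rotation matrix shape f{matrix_shape}."
assert buggy == "Invalid rotation matrix shape f(4, 4)."

# Fixed: only one f, as the prefix of the string literal.
fixed = f"Invalid rotation matrix shape {matrix_shape}."
assert fixed == "Invalid rotation matrix shape (4, 4)."
```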

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/851

Reviewed By: nikhilaravi

Differential Revision: D31238831

Pulled By: patricklabatut

fbshipit-source-id: 0ba3e61e488e467e997954278097889be606d4f8
2021-09-30 03:26:14 -07:00
Jeremy Reizenstein
1aab192706 Linter when only python3 exists
Reviewed By: nikhilaravi

Differential Revision: D31289856

fbshipit-source-id: 5a522a69537a873bacacf2a178e5f30771aef35f
2021-09-30 00:55:38 -07:00
Jeremy Reizenstein
dd76b41014 save colors as uint8 in PLY
Summary: Allow saving colors as 8bit when writing .ply files.

Reviewed By: patricklabatut, nikitos9000

Differential Revision: D30905312

fbshipit-source-id: 44500982c9ed6d6ee901e04f9623e22792a0e7f7
2021-09-30 00:48:52 -07:00
Georgia Gkioxari
1b1ba5612f Note for iou3d
Summary:
A note for our new algorithm for IoU of oriented 3D boxes. It includes
* A description of the algorithm
* A comparison with Objectron

Reviewed By: nikhilaravi

Differential Revision: D31288066

fbshipit-source-id: 0ea8da887bc5810bf4a3e0848223dd3590df1538
2021-09-29 19:15:19 -07:00
Nikhila Ravi
ff8d4762f4 (new) CUDA IoU for 3D boxes
Summary: CUDA implementation of 3D bounding box overlap calculation.

Reviewed By: gkioxari

Differential Revision: D31157919

fbshipit-source-id: 5dc89805d01fef2d6779f00a33226131e39c43ed
2021-09-29 18:49:09 -07:00
Nikhila Ravi
53266ec9ff C++ IoU for 3D Boxes
Summary: C++ Implementation of algorithm to compute 3D bounding boxes for batches of bboxes of shape (N, 8, 3) and (M, 8, 3).

Reviewed By: gkioxari

Differential Revision: D30905190

fbshipit-source-id: 02e2cf025cd4fa3ff706ce5cf9b82c0fb5443f96
2021-09-29 17:03:43 -07:00
Nikhila Ravi
2293f1fed0 IoU for 3D boxes
Summary:
I have implemented an exact solution for 3D IoU of oriented 3D boxes.

This file includes:
* box3d_overlap: which computes the exact IoU of box1 and box2
* box3d_overlap_sampling: which computes an approximate IoU of box1 and box2 by sampling points within the boxes

Note that both implementations currently do not support batching.

Our exact IoU implementation is based on the fact that the intersecting shape of the two 3D boxes is formed by segments of the boxes' surfaces. Our algorithm computes these segments by reasoning about whether triangles of one box lie within the second box and vice versa. We deal with intersecting triangles by clipping them.
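The sampling-based variant is easy to illustrate on the simpler axis-aligned case (a sketch only — `box3d_overlap_sampling` itself handles oriented boxes given as (8, 3) corner tensors, and this helper name is made up for the illustration):

```python
def sampled_iou_axis_aligned(box1, box2, n=30):
    """Approximate 3D IoU by counting grid samples inside each box.

    Boxes here are axis-aligned (min_corner, max_corner) pairs -- a
    simplification of the oriented-box case the real function handles.
    """
    # Joint bounding box of the two boxes, sampled on an n^3 grid.
    lo = [min(box1[0][k], box2[0][k]) for k in range(3)]
    hi = [max(box1[1][k], box2[1][k]) for k in range(3)]

    def inside(p, box):
        return all(box[0][k] <= p[k] <= box[1][k] for k in range(3))

    in1 = in2 = inter = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # Cell-centered samples over the joint bounding box.
                p = [lo[d] + (hi[d] - lo[d]) * (([i, j, k][d] + 0.5) / n)
                     for d in range(3)]
                a, b = inside(p, box1), inside(p, box2)
                in1 += a
                in2 += b
                inter += a and b
    union = in1 + in2 - inter
    return inter / union if union else 0.0

# Two unit cubes overlapping by half along x: exact IoU = 0.5 / 1.5 = 1/3.
iou = sampled_iou_axis_aligned(((0, 0, 0), (1, 1, 1)),
                               ((0.5, 0, 0), (1.5, 1, 1)))
```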

Reviewed By: gkioxari

Differential Revision: D30667497

fbshipit-source-id: 2f747f410f90b7f854eeaf3036794bc3ac982917
2021-09-29 13:44:10 -07:00
Pyre Bot Jr
5b89c4e3bb suppress errors in vision/fair/pytorch3d
Differential Revision: D31266959

fbshipit-source-id: 878a59ca2cfe1389e42fc338653e8d3314b56b91
2021-09-29 05:07:37 -07:00
Jeremy Reizenstein
d0ca3b9e0c builds for PyTorch 1.9.1
Summary: Add conda builds for the newly released PyTorch version 1.9.1.

Reviewed By: patricklabatut

Differential Revision: D31140206

fbshipit-source-id: 697549a3ef0db8248f4f9b5c00cf1407296b5022
2021-09-27 04:17:13 -07:00
Jeremy Reizenstein
9a737da83c More renderer parameter descriptions
Summary:
Copy some descriptions of renderer parameters to more places so they are easier to find.

Also a couple of small corrections, and make RasterizationSettings a dataclass.

Reviewed By: nikhilaravi, patricklabatut

Differential Revision: D30899822

fbshipit-source-id: 805cf366acb7d51cb308fa574deff0657c199673
2021-09-24 09:59:24 -07:00
Jeremy Reizenstein
860b742a02 deterministic rasterization
Summary: Attempt to fix #659, an observation that the rasterizer is nondeterministic, by resolving ties between faces by picking the one with the lower index.

Reviewed By: nikhilaravi, patricklabatut

Differential Revision: D30699039

fbshipit-source-id: 39ed797eb7e9ce7370ae71259ad6b757f9449923
2021-09-23 06:59:48 -07:00
Jeremy Reizenstein
cb170ac024 Avoid torch/extension.h in cuda
Summary: Unlike other .cu files, sigmoid_alpha_blend uses torch/extension.h. Avoid it for a possible build-speed win and because of a reported problem (#843) on Windows with CUDA 11.4.

Reviewed By: nikhilaravi

Differential Revision: D31054121

fbshipit-source-id: 53a1f985a1695a044dfd2ee1a5b0adabdf280595
2021-09-22 15:54:59 -07:00
Jeremy Reizenstein
fe5bfa5994 rename cpp to avoid clash
Summary: Rename sample_farthest_point.cpp to not match its CUDA equivalent.

Reviewed By: nikhilaravi

Differential Revision: D31006645

fbshipit-source-id: 135b511cbde320d2b3e07fc5b027971ef9210aa9
2021-09-22 15:54:59 -07:00
Jeremy Reizenstein
dbfb3a910a remove __restrict__ in cpp
Summary: Remove use of nonstandard C++. Noticed on windows in issue https://github.com/facebookresearch/pytorch3d/issues/843. (We use `__restrict__` in CUDA, where it is fine, even on windows)

Reviewed By: nikhilaravi

Differential Revision: D31006516

fbshipit-source-id: 929ba9b3216cb70fad3ffa3274c910618d83973f
2021-09-22 15:54:59 -07:00
Pyre Bot Jr
526df446c6 suppress errors in vision/fair/pytorch3d
Differential Revision: D31042748

fbshipit-source-id: fffb983bd6765d306a407587ddf64e68e57e9ecc
2021-09-18 12:24:58 -07:00
Nikhila Ravi
bd04ffaf77 Farthest point sampling CUDA
Summary:
CUDA implementation of farthest point sampling algorithm.

## Visual comparison

Compared to random sampling, farthest point sampling gives better coverage of the shape.

{F658631262}

## Reduction

Parallelized block reduction to find the max value at each iteration happens as follows:

1. First split the points into two equal-sized parts (e.g. for a list with 8 values):
`[20, 27, 6, 8 | 11, 10, 2, 33]`
2. Use half of the threads (4 threads) to compare pairs of elements from each half (e.g. elements [0, 4], [1, 5], etc.) and store the result in the first half of the list:
`[20, 27, 6, 33 | 11, 10, 2, 33]`
3. We no longer care about the second part; again divide the first part into two
`[20, 27 | 6, 33 | -, -, -, -]`
and use 2 threads to compare the 4 elements.
4. Finally we have gotten down to a single pair
`[20 | 33 | -, - | -, -, -, -]`
Use 1 thread to compare the remaining two elements.
5. The max will now be at thread id = 0
`[33 | - | -, - | -, -, -, -]`
The reduction will give the farthest point for the selected batch index at this iteration.
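The steps above can be sketched sequentially in Python, each pass of the loop standing in for one synchronized step of the CUDA block reduction:

```python
def block_reduce_max(values):
    """Pairwise tree reduction of a max, mirroring the CUDA block reduction.

    On each pass, "thread" i of the active half compares element i with
    element i + step and keeps the larger in slot i.
    """
    vals = list(values)
    step = len(vals) // 2  # assumes a power-of-two length, like a CUDA block
    while step >= 1:
        for i in range(step):  # thread i of this pass
            if vals[i + step] > vals[i]:
                vals[i] = vals[i + step]
        step //= 2
    return vals[0]  # the max ends up at "thread 0"

# The worked example from the description above.
assert block_reduce_max([20, 27, 6, 8, 11, 10, 2, 33]) == 33
```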

Reviewed By: bottler, jcjohnson

Differential Revision: D30401803

fbshipit-source-id: 525bd5ae27c4b13b501812cfe62306bb003827d2
2021-09-15 13:49:22 -07:00
Nikhila Ravi
d9f7611c4b Farthest point sampling C++
Summary: C++ implementation of iterative farthest point sampling.

Reviewed By: jcjohnson

Differential Revision: D30349887

fbshipit-source-id: d25990f857752633859fe00283e182858a870269
2021-09-15 13:49:21 -07:00
Nikhila Ravi
3b7d78c7a7 Farthest point sampling python naive
Summary:
This is a naive python implementation of the iterative farthest point sampling algorithm along with associated simple tests. The C++/CUDA implementations will follow in subsequent diffs.

The algorithm is used to subsample a pointcloud with better coverage of the space of the pointcloud.

The function has not been added to `__init__.py`. I will add this after the full C++/CUDA implementations.
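The naive algorithm can be sketched as follows (an illustrative pure-Python version on plain point lists; the actual implementation works on batched PyTorch tensors):

```python
def farthest_point_sample(points, k, start=0):
    """Naive iterative farthest point sampling.

    Greedily picks the point farthest from everything chosen so far,
    which gives better coverage of the cloud than random sampling.
    Returns the indices of the k selected points.
    """
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    selected = [start]
    # min_d2[i]: squared distance from point i to its nearest selected point.
    min_d2 = [d2(p, points[start]) for p in points]
    for _ in range(k - 1):
        nxt = max(range(len(points)), key=lambda i: min_d2[i])
        selected.append(nxt)
        min_d2 = [min(m, d2(p, points[nxt])) for m, p in zip(min_d2, points)]
    return selected

pts = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0), (5.0, 0.0)]
# Starting at index 0, FPS spreads out: (0,0), then (10,0), then (5,0).
assert farthest_point_sample(pts, 3) == [0, 2, 3]
```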

Reviewed By: jcjohnson

Differential Revision: D30285716

fbshipit-source-id: 33f4181041fc652776406bcfd67800a6f0c3dd58
2021-09-15 13:49:21 -07:00
Jeremy Reizenstein
a0d76a7080 join_scene fix for TexturesUV
Summary: Fix issue #826. This is a correction to the joining of TexturesUV into a single scene.

Reviewed By: nikhilaravi

Differential Revision: D30767092

fbshipit-source-id: 03ba6a1d2f22e569d1b3641cd13ddbb8dcb87ec7
2021-09-13 07:08:58 -07:00
Shangchen Han
46f727cb68 make so3_log_map torch script compatible
Summary:
* HAT_INV_SKEW_SYMMETRIC_TOL was a global variable, and torch script gives an error when compiling that function. Move it to the function scope.
* torch script gives an error when compiling acos_linear_extrapolation because `bound` is a union of tuple and float. The tuple version is kept in this diff.

Reviewed By: patricklabatut

Differential Revision: D30614916

fbshipit-source-id: 34258d200dc6a09fbf8917cac84ba8a269c00aef
2021-09-10 11:13:26 -07:00
Jeremy Reizenstein
c3d7808868 register_buffer compatibility
Summary: In D30349234 (1b8d86a104) we introduced persistent=False to some register_buffer calls, which depend on PyTorch 1.6. We go back to the old behaviour for PyTorch 1.5.

Reviewed By: nikhilaravi

Differential Revision: D30731327

fbshipit-source-id: ab02ef98ee87440ef02479b72f4872b562ab85b5
2021-09-09 07:37:57 -07:00
Justin Johnson
bbc7573261 Unify coarse rasterization for points and meshes
Summary:
There has historically been a lot of duplication between the coarse rasterization logic for point clouds and meshes. This diff factors out the shared logic, so coarse rasterization of point clouds and meshes share the same core logic.

Previously the only difference between the coarse rasterization kernels for points and meshes was the logic for checking whether a {point / triangle} intersects a tile in the image. We implement a generic coarse rasterization kernel that takes a set of 2D bounding boxes rather than geometric primitives; we then implement separate kernels that compute 2D bounding boxes for points and triangles.

This change does not affect the Python API at all. It also should not change any rasterization behavior, since this diff is just a refactoring of the existing logic.
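The bounding-box idea can be sketched as follows (hypothetical helpers illustrating the refactoring, not the actual kernel code): each primitive type reduces to a 2D box, and the coarse kernel only needs one generic box-vs-tile test.

```python
def triangle_bbox_2d(tri):
    """Axis-aligned 2D bbox (xmin, ymin, xmax, ymax) of a triangle
    given as three (x, y) screen-space vertices."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    return min(xs), min(ys), max(xs), max(ys)

def point_bbox_2d(center, radius):
    """Bbox of a point splat with the given screen-space radius."""
    x, y = center
    return x - radius, y - radius, x + radius, y + radius

def bbox_intersects_tile(bbox, tile):
    """The single generic test the shared coarse kernel needs: does a
    primitive's bbox overlap an image tile (also an xmin/ymin/xmax/ymax box)?"""
    return (bbox[0] <= tile[2] and bbox[2] >= tile[0]
            and bbox[1] <= tile[3] and bbox[3] >= tile[1])

tri_box = triangle_bbox_2d([(2, 1), (6, 3), (4, 5)])
assert tri_box == (2, 1, 6, 5)
assert bbox_intersects_tile(tri_box, (0, 0, 4, 4))        # overlaps this tile
assert not bbox_intersects_tile(tri_box, (8, 8, 12, 12))  # misses this one
```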

I see this diff as the first in a few pieces of rasterizer refactoring. Followup diffs should do the following:
- Add a check for bin overflow in the generic coarse rasterizer kernel: allocate a global scalar to flag bin overflow which kernel worker threads can write to in case they detect bin overflow. The C++ launcher function can then check this flag after the kernel returns and issue a warning to the user in case of overflow.
- As a slightly more involved mechanism, if bin overflow is detected then the coarse kernel can continue running in order to count how many elements fall into each bin, without actually writing out their indices to the coarse output tensor. Then the actual number of entries per bin can be used to re-allocate the output tensor and re-run the coarse rasterization kernel so that bin overflow can be automatically avoided.
- The unification of the coarse and fine rasterization kernels also allows us to insert an extra CUDA kernel prior to coarse rasterization that filters out primitives outside the view frustum. This would be helpful for rendering full scenes (e.g. Matterport data) where only a small piece of the mesh is actually visible at any one time.

Reviewed By: bottler

Differential Revision: D25710361

fbshipit-source-id: 9c9dea512cb339c42adb3c92e7733fedd586ce1b
2021-09-08 16:17:30 -07:00
Justin Johnson
eed68f457d Refactor mesh coarse rasterization
Summary: Renaming parts of the mesh coarse rasterization and separating the bounding box calculation. All in preparation for sharing code with point rasterization.

Reviewed By: bottler

Differential Revision: D30369112

fbshipit-source-id: 3508c0b1239b355030cfa4038d5f3d6a945ebbf4
2021-09-08 16:17:30 -07:00
Justin Johnson
62dbf371ae Move coarse rasterization to new file
Summary: In preparation for sharing coarse rasterization between point clouds and meshes, move the functions to a new file. No code changes.

Reviewed By: bottler

Differential Revision: D30367812

fbshipit-source-id: 9e73835a26c4ac91f5c9f61ff682bc8218e36c6a
2021-09-08 16:17:30 -07:00
Jeremy Reizenstein
f2c44e3540 update test_build for robustness
Summary: Change cyclic deps test to be independent of test discovery order. Also let it work without plotly.

Reviewed By: nikhilaravi

Differential Revision: D30669614

fbshipit-source-id: 2eadf3f8b56b6096c5466ce53b4f8ac6df27b964
2021-09-02 09:32:29 -07:00
Jeremy Reizenstein
a9b0d50baf Restore missing linux conda builds
Summary: Regenerate config.yml after a recent bad merge which lost a few builds.

Reviewed By: nikhilaravi

Differential Revision: D30696918

fbshipit-source-id: 3ecdfca8682baed13692ec710aa7c25dbd24dd44
2021-09-01 10:29:05 -07:00
Nikhila Ravi
fc156b50c0 (bug) Fix exception when creating a TextureAtlas
Summary: Fixes GitHub issue #751. The vectorized implementation of bilinear interpolation didn't properly handle the edge cases in the same way as the `grid_sample` method in PyTorch.

Reviewed By: bottler

Differential Revision: D30684208

fbshipit-source-id: edf241ecbd72d46b94ad340a4e601e26c83db88e
2021-09-01 09:26:44 -07:00
Georgia Gkioxari
835e662fb5 master -> main
Summary: Replace master with main in hard coded paths or mentions in documentation

Reviewed By: bottler

Differential Revision: D30696097

fbshipit-source-id: d5ff67bb026d90d1543d10ab027f916e8361ca69
2021-09-01 05:33:25 -07:00
Jeremy Reizenstein
1b8d86a104 (breaking) image_size-agnostic GridRaySampler
Summary:
As suggested in #802. By not persisting the _xy_grid buffer, we can (in some cases) load a model with one image_size from a checkpoint saved from a model trained at a different resolution.

Also avoid persisting _frequencies in HarmonicEmbedding for similar reasons.

BC-break: This will cause load_state_dict, in strict mode, to complain if you try to load an old model with the new code.
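The mechanism above can be mimicked in a torch-free toy (class and method names are ours; this is a sketch of the idea, not the GridRaySampler code): a buffer that is derived from `image_size` and excluded from the checkpoint never constrains the resolution at load time, while strict loading rejects old checkpoints that did persist it.

```python
class GridRaySamplerToy:
    """Toy illustration of a non-persistent, recomputable buffer."""

    def __init__(self, image_size):
        self.image_size = image_size
        # analogous to register_buffer(..., persistent=False):
        # derived from image_size, never serialized
        self._xy_grid = [(x, y) for y in range(image_size) for x in range(image_size)]

    def state_dict(self):
        return {"image_size": self.image_size}  # no _xy_grid entry

    def load_state_dict(self, state, strict=True):
        unexpected = set(state) - set(self.state_dict())
        if strict and unexpected:
            # mirrors torch's strict-mode complaint when an old checkpoint
            # that DID persist _xy_grid is loaded into the new code
            raise KeyError(f"unexpected keys: {sorted(unexpected)}")
        self.image_size = state["image_size"]
        # the buffer is rebuilt, not loaded, so resolutions may differ
        self._xy_grid = [(x, y) for y in range(self.image_size) for x in range(self.image_size)]

model = GridRaySamplerToy(128)
model.load_state_dict({"image_size": 64})  # fine: grid is recomputed
print(len(model._xy_grid))  # 4096
```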

Reviewed By: patricklabatut

Differential Revision: D30349234

fbshipit-source-id: d6061d1e51c9f79a78d61a9f732c9a5dfadbbb47
2021-08-31 14:30:24 -07:00
Jeremy Reizenstein
1251446383 Use sample_pdf from PyTorch3D in NeRF
Summary:
Use PyTorch3D's new faster sample_pdf function instead of local Python implementation.

Also clarify deps for the Python implementation.

Reviewed By: gkioxari

Differential Revision: D30512109

fbshipit-source-id: 84cfdc00313fada37a6b29837de96f6a4646434f
2021-08-31 11:26:26 -07:00
Alex Naumann
d2bbd0cdb7 Fix link to render textured meshes example (#818)
Summary:
Great work! :)
Just found a link in the examples that is not working. This will fix it.

Best,
Alex

Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/818

Reviewed By: nikhilaravi

Differential Revision: D30637532

Pulled By: patricklabatut

fbshipit-source-id: ed6c52375d1e760cb0fb2c0a66648dfeb0c6ed46
2021-08-30 13:11:53 -07:00
Jeremy Reizenstein
6c416b319c remove PyTorch 1.4 builds
Summary: We won't support PyTorch 1.4 in the next release. PyTorch 1.5.0 came out in June 2020, more than a year ago.

Reviewed By: patricklabatut

Differential Revision: D30424388

fbshipit-source-id: 25499096066c9a2b909a0550394f5210409f0d74
2021-08-23 08:32:41 -07:00
Jeremy Reizenstein
77fa5987b8 check for cyclic deps
Summary: New test that each subpackage of pytorch3d imports cleanly.

Reviewed By: patricklabatut

Differential Revision: D30001632

fbshipit-source-id: ca8dcac94491fc22f33602b3bbef481cba927094
2021-08-23 06:16:40 -07:00
Pyre Bot Jr
fadec970c9 suppress errors in vision/fair/pytorch3d
Differential Revision: D30479084

fbshipit-source-id: 6b22dd0afe4dfb1be6249e43a56657519f11dcf1
2021-08-22 23:39:37 -07:00
Jeremy Reizenstein
1ea2b7272a sample_pdf CUDA and C++ implementations.
Summary: Implement the sample_pdf function from the NeRF project as compiled operators. The binary search (in searchsorted) is replaced with a low-tech linear search, which is not a problem for the envisaged numbers of bins.
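The operation can be sketched in NumPy as inverse-transform sampling over histogram bins (the semantics assumed from NeRF's hierarchical sampling), with a linear scan standing in for searchsorted as the commit describes. This is an illustration, not the compiled operator.

```python
import numpy as np

def sample_pdf(bins, weights, n_samples, u=None):
    """Draw n_samples points from the piecewise-constant PDF defined by
    bin edges `bins` (len N+1) and per-bin `weights` (len N)."""
    pdf = weights / weights.sum()
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    if u is None:
        u = np.random.rand(n_samples)  # uniform samples to invert
    samples = np.empty(n_samples)
    for k, uk in enumerate(u):
        # linear search for the CDF interval containing uk
        # (fine for small bin counts, as noted in the commit message)
        j = 0
        while j + 1 < len(cdf) - 1 and cdf[j + 1] <= uk:
            j += 1
        t = (uk - cdf[j]) / max(cdf[j + 1] - cdf[j], 1e-8)
        samples[k] = bins[j] + t * (bins[j + 1] - bins[j])
    return samples

bins = np.array([0.0, 1.0, 2.0])
weights = np.array([3.0, 1.0])  # 75% of the mass in the first bin
s = sample_pdf(bins, weights, 4, u=np.array([0.0, 0.375, 0.75, 0.9]))
print(s)  # 0.0, 0.5, 1.0, 1.6
```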

Reviewed By: gkioxari

Differential Revision: D26312535

fbshipit-source-id: df1c3119cd63d944380ed1b2657b6ad81d743e49
2021-08-17 08:07:55 -07:00
Jeremy Reizenstein
7d7d00f288 Move sample_pdf into PyTorch3D
Summary: Copy the sample_pdf operation from the NeRF project in to PyTorch3D, in preparation for optimizing it.

Reviewed By: gkioxari

Differential Revision: D27117930

fbshipit-source-id: 20286b007f589a4c4d53ed818c4bc5f2abd22833
2021-08-17 08:07:55 -07:00
Jeremy Reizenstein
b481cfbd01 Correct shape for default grid_sizes
Summary: Small fix for omitting this argument.

Reviewed By: nikhilaravi

Differential Revision: D29548610

fbshipit-source-id: f25032fab3faa2f09006f5fcf8628138555f2f20
2021-08-17 05:59:07 -07:00
Jeremy Reizenstein
46cf1970ac cpu benchmarks for points to volumes
Summary:
Add a CPU version to the benchmarks.

```
Benchmark                                                               Avg Time(μs)      Peak Time(μs) Iterations
--------------------------------------------------------------------------------
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[25, 25, 25]_1000                    10100           46422             50
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[25, 25, 25]_10000                   28442           32100             18
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[25, 25, 25]_100000                 241127          254269              3
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[101, 111, 121]_1000                 54149           79480             10
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[101, 111, 121]_10000               125459          212734              4
ADD_POINTS_TO_VOLUMES_cpu_10_trilinear_[101, 111, 121]_100000              512739          512739              1
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[25, 25, 25]_1000                       2866           13365            175
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[25, 25, 25]_10000                      7026           12604             72
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[25, 25, 25]_100000                    48822           55607             11
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[101, 111, 121]_1000                   38098           38576             14
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[101, 111, 121]_10000                  48006           54120             11
ADD_POINTS_TO_VOLUMES_cpu_10_nearest_[101, 111, 121]_100000                131563          138536              4
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[25, 25, 25]_1000                   64615           91735              8
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[25, 25, 25]_10000                 228815          246095              3
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[25, 25, 25]_100000               3086615         3086615              1
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[101, 111, 121]_1000               464298          465292              2
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[101, 111, 121]_10000             1053440         1053440              1
ADD_POINTS_TO_VOLUMES_cpu_100_trilinear_[101, 111, 121]_100000            6736236         6736236              1
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[25, 25, 25]_1000                     11940           12440             42
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[25, 25, 25]_10000                    56641           58051              9
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[25, 25, 25]_100000                  711492          711492              1
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[101, 111, 121]_1000                 326437          329846              2
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[101, 111, 121]_10000                418514          427911              2
ADD_POINTS_TO_VOLUMES_cpu_100_nearest_[101, 111, 121]_100000              1524285         1524285              1
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[25, 25, 25]_1000                  5949           13602             85
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[25, 25, 25]_10000                 5817           13001             86
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[25, 25, 25]_100000               23833           25971             21
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[101, 111, 121]_1000               9029           16178             56
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[101, 111, 121]_10000             11595           18601             44
ADD_POINTS_TO_VOLUMES_cuda:0_10_trilinear_[101, 111, 121]_100000            46986           47344             11
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[25, 25, 25]_1000                    2554            9747            196
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[25, 25, 25]_10000                   2676            9537            187
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[25, 25, 25]_100000                  6567           14179             77
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[101, 111, 121]_1000                 5840           12811             86
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[101, 111, 121]_10000                6102           13128             82
ADD_POINTS_TO_VOLUMES_cuda:0_10_nearest_[101, 111, 121]_100000              11945           11995             42
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[25, 25, 25]_1000                 7642           13671             66
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[25, 25, 25]_10000               25190           25260             20
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[25, 25, 25]_100000             212018          212134              3
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[101, 111, 121]_1000             40421           45692             13
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[101, 111, 121]_10000            92078           92132              6
ADD_POINTS_TO_VOLUMES_cuda:0_100_trilinear_[101, 111, 121]_100000          457211          457229              2
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[25, 25, 25]_1000                   3574           10377            140
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[25, 25, 25]_10000                  7222           13023             70
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[25, 25, 25]_100000                48127           48165             11
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[101, 111, 121]_1000               34732           35295             15
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[101, 111, 121]_10000              43050           51064             12
ADD_POINTS_TO_VOLUMES_cuda:0_100_nearest_[101, 111, 121]_100000            106028          106058              5
--------------------------------------------------------------------------------
```

Reviewed By: patricklabatut

Differential Revision: D29522830

fbshipit-source-id: 1e857db03613b0c6afcb68a58cdd7ba032e1a874
2021-08-17 05:59:07 -07:00
Jeremy Reizenstein
5491b46511 Points2vols doc fixes
Summary: Fix a couple of comments on points to volumes, make the mask work in round_points_to_volumes, and remove a duplicate rand calculation

Reviewed By: nikhilaravi

Differential Revision: D29522845

fbshipit-source-id: 86770ba37ef3942b909baf63fd73eed1399635b6
2021-08-17 05:59:07 -07:00
Jeremy Reizenstein
ae1387b523 let build tests run in conda
Summary: Much of the code is actually available during the conda tests, as long as we look in the right place. We enable some of these tests.

Reviewed By: nikhilaravi

Differential Revision: D30249357

fbshipit-source-id: 01c57b6b8c04442237965f23eded594aeb90abfb
2021-08-17 04:26:27 -07:00
Jeremy Reizenstein
b0dd0c8821 rename master branch to main
Summary: Change doc references to master branch to its new name main.

Reviewed By: nikhilaravi

Differential Revision: D30303018

fbshipit-source-id: cfdbb207dfe3366de7e0ca759ed56f4b8dd894d1
2021-08-16 04:06:53 -07:00
Nikhila Ravi
103da63393 Ball Query
Summary:
Implementation of ball query from PointNet++.  This function is similar to KNN (find the neighbors in p2 for all points in p1). These are the key differences:
-  It will return the **first** K neighbors within a specified radius as opposed to the **closest** K neighbors.
- As all the points in p2 do not need to be considered to find the closest K, the algorithm is much faster than KNN when p2 has a large number of points.
- The neighbors are not sorted
- Due to the radius threshold it is not guaranteed that there will be K neighbors even if there are more than K points in p2.
- The padding value for `idx` is -1 instead of 0.

# Note:
- Some of the code is very similar to KNN so it could be possible to modify the KNN forward kernels to support ball query.
- Some users might want to use kNN with ball query - for this we could provide a wrapper function around the current `knn_points` which enables applying the radius threshold afterwards as an alternative. This could be called `ball_query_knn`.
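The first-K-within-radius semantics and the -1 padding listed above can be sketched with a brute-force NumPy toy (an illustration of the behavior, not the CUDA kernel; the loops and function name are ours):

```python
import numpy as np

def ball_query(p1, p2, K, radius):
    """For each point in p1, return the indices of the FIRST K points in
    p2 within `radius` (not the closest K), padding with -1 when fewer
    than K neighbors exist. Neighbors are not sorted by distance."""
    idx = np.full((len(p1), K), -1, dtype=np.int64)
    for i, q in enumerate(p1):
        found = 0
        for j, p in enumerate(p2):
            if found == K:
                break  # early exit: first K found, rest are never examined
            if np.linalg.norm(q - p) <= radius:
                idx[i, found] = j
                found += 1
    return idx

p1 = np.array([[0.0, 0.0]])
p2 = np.array([[0.1, 0.0], [5.0, 5.0], [0.0, 0.2], [0.05, 0.05]])
print(ball_query(p1, p2, K=2, radius=1.0))   # [[0 2]]: first two in range, unsorted
print(ball_query(p1, p2, K=3, radius=0.15))  # [[0 3 -1]]: padded with -1
```

The early `break` is what makes the real implementation faster than KNN on large `p2`: candidates after the first K hits are never examined.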

Reviewed By: jcjohnson

Differential Revision: D30261362

fbshipit-source-id: 66b6a7e0114beff7164daf7eba21546ff41ec450
2021-08-12 14:06:32 -07:00
Jeremy Reizenstein
e5c58a8a8b Test website metadata
Summary: New test that notes and tutorials are listed in the website metadata, so that they will be included in the website build.

Reviewed By: nikhilaravi

Differential Revision: D30223799

fbshipit-source-id: 2dca9730b54e68da2fd430a7b47cb7e18814d518
2021-08-12 05:07:55 -07:00
Jeremy Reizenstein
64faedfd57 Add new doc and new tutorials to website
Summary: Recent additions need to be included.

Reviewed By: nikhilaravi

Differential Revision: D30223717

fbshipit-source-id: 4b29a4132ea6fb7c1a530aac5d1e36aa61c663bb
2021-08-12 05:07:55 -07:00
Pyre Bot Jr
9db70400d8 suppress errors in fbcode/vision - batch 2
Differential Revision: D30222339

fbshipit-source-id: 97d498df72ef897b8dc2405764e3ffd432082e3c
2021-08-10 10:21:59 -07:00
Nikhila Ravi
804117833e Fix to allow cameras in the renderer forward pass
Summary: Fix to resolve GitHub issue #796 - the cameras were being passed in the renderer forward pass instead of at initialization. The rasterizer was correctly using the cameras passed in the `kwargs` for the projection, but `cameras` was still part of the `kwargs` for the `get_screen_to_ndc_transform` and `get_ndc_to_screen_transform` functions, which caused duplicate-argument errors.

Reviewed By: bottler

Differential Revision: D30175679

fbshipit-source-id: 547e88d8439456e728fa2772722df5fa0fe4584d
2021-08-09 11:42:50 -07:00
667 changed files with 53084 additions and 3851 deletions

View File

```
@@ -1,5 +1,5 @@
 #!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
+# Copyright (c) Meta Platforms, Inc. and affiliates.
 # All rights reserved.
 #
 # This source code is licensed under the BSD-style license found in the
```

View File

```
@@ -1,5 +1,5 @@
 #!/bin/bash -e
-# Copyright (c) Facebook, Inc. and its affiliates.
+# Copyright (c) Meta Platforms, Inc. and affiliates.
 # All rights reserved.
 #
 # This source code is licensed under the BSD-style license found in the
```

View File

```
@@ -18,20 +18,13 @@ setupcuda: &setupcuda
     working_directory: ~/
     command: |
       # download and install nvidia drivers, cuda, etc
-      wget --no-verbose --no-clobber -P ~/nvidia-downloads https://developer.download.nvidia.com/compute/cuda/11.2.2/local_installers/cuda_11.2.2_460.32.03_linux.run
-      sudo sh ~/nvidia-downloads/cuda_11.2.2_460.32.03_linux.run --silent
+      wget --no-verbose --no-clobber -P ~/nvidia-downloads https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda_11.3.1_465.19.01_linux.run
+      sudo sh ~/nvidia-downloads/cuda_11.3.1_465.19.01_linux.run --silent
       echo "Done installing CUDA."
       pyenv versions
       nvidia-smi
       pyenv global 3.9.1
-
-gpu: &gpu
-  environment:
-    CUDA_VERSION: "10.2"
-  machine:
-    image: default
-  resource_class: gpu.medium # tesla m60

 binary_common: &binary_common
   parameters:
     # Edit these defaults to do a release`
@@ -54,42 +47,41 @@ binary_common: &binary_common
       description: "Wheel only: what docker image to use"
       type: string
       default: "pytorch/manylinux-cuda101"
+    conda_docker_image:
+      description: "what docker image to use for docker"
+      type: string
+      default: "pytorch/conda-cuda"
   environment:
     PYTHON_VERSION: << parameters.python_version >>
     BUILD_VERSION: << parameters.build_version >>
     PYTORCH_VERSION: << parameters.pytorch_version >>
     CU_VERSION: << parameters.cu_version >>
+    TESTRUN_DOCKER_IMAGE: << parameters.conda_docker_image >>

 jobs:
   main:
-    <<: *gpu
+    environment:
+      CUDA_VERSION: "11.3"
+    resource_class: gpu.nvidia.small.multi
     machine:
       image: ubuntu-2004:202101-01
     steps:
     - checkout
     - <<: *setupcuda
     - run: pip3 install --progress-bar off imageio wheel matplotlib 'pillow<7'
-    - run: pip3 install --progress-bar off torch torchvision
+    - run: pip3 install --progress-bar off torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
     # - run: conda create -p ~/conda_env python=3.7 numpy
     # - run: conda activate ~/conda_env
     # - run: conda install -c pytorch pytorch torchvision
     - run: pip3 install --progress-bar off 'git+https://github.com/facebookresearch/fvcore'
     - run: pip3 install --progress-bar off 'git+https://github.com/facebookresearch/iopath'
-    - run:
-        name: get cub
-        command: |
-          cd ..
-          wget --no-verbose https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
-          tar xzf 1.10.0.tar.gz
-          # This expands to a directory called cub-1.10.0
     - run:
         name: build
         command: |
-          export LD_LIBRARY_PATH=$LD_LIBARY_PATH:/usr/local/cuda-11.2/lib64
-          export CUB_HOME=$(realpath ../cub-1.10.0)
+          export LD_LIBRARY_PATH=$LD_LIBARY_PATH:/usr/local/cuda-11.3/lib64
           python3 setup.py build_ext --inplace
-    - run: LD_LIBRARY_PATH=$LD_LIBARY_PATH:/usr/local/cuda-11.2/lib64 python -m unittest discover -v -s tests
+    - run: LD_LIBRARY_PATH=$LD_LIBARY_PATH:/usr/local/cuda-11.3/lib64 python -m unittest discover -v -s tests -t .
     - run: python3 setup.py bdist_wheel

 binary_linux_wheel:
@@ -113,7 +105,7 @@ jobs:
   binary_linux_conda:
     <<: *binary_common
     docker:
-      - image: "pytorch/conda-cuda"
+      - image: "<< parameters.conda_docker_image >>"
        auth:
          username: $DOCKERHUB_USERNAME
          password: $DOCKERHUB_TOKEN
@@ -136,62 +128,21 @@ jobs:
   binary_linux_conda_cuda:
     <<: *binary_common
     machine:
-      image: ubuntu-1604:201903-01
-    resource_class: gpu.medium
+      image: ubuntu-1604-cuda-10.2:202012-01
+    resource_class: gpu.nvidia.small.multi
     steps:
     - checkout
-    - run:
-        name: Setup environment
-        command: |
-          set -e
-          curl -L https://packagecloud.io/circleci/trusty/gpgkey | sudo apt-key add -
-          curl -L https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
-          sudo apt-get update
-          sudo apt-get install \
-            apt-transport-https \
-            ca-certificates \
-            curl \
-            gnupg-agent \
-            software-properties-common
-          curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
-          sudo add-apt-repository \
-            "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
-            $(lsb_release -cs) \
-            stable"
-          sudo apt-get update
-          export DOCKER_VERSION="5:19.03.2~3-0~ubuntu-xenial"
-          sudo apt-get install docker-ce=${DOCKER_VERSION} docker-ce-cli=${DOCKER_VERSION} containerd.io=1.2.6-3
-          # Add the package repositories
-          distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
-          curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
-          curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
-          export NVIDIA_CONTAINER_VERSION="1.0.3-1"
-          sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit=${NVIDIA_CONTAINER_VERSION}
-          sudo systemctl restart docker
-          DRIVER_FN="NVIDIA-Linux-x86_64-460.84.run"
-          wget "https://us.download.nvidia.com/XFree86/Linux-x86_64/460.84/$DRIVER_FN"
-          sudo /bin/bash "$DRIVER_FN" -s --no-drm || (sudo cat /var/log/nvidia-installer.log && false)
-          nvidia-smi
     - run:
         name: Pull docker image
         command: |
+          nvidia-smi
           set -e
           { docker login -u="$DOCKERHUB_USERNAME" -p="$DOCKERHUB_TOKEN" ; } 2> /dev/null
-          DOCKER_IMAGE=pytorch/conda-cuda
-          echo Pulling docker image $DOCKER_IMAGE
-          docker pull $DOCKER_IMAGE
+          echo Pulling docker image $TESTRUN_DOCKER_IMAGE
+          docker pull $TESTRUN_DOCKER_IMAGE
     - run:
         name: Build and run tests
         no_output_timeout: 20m
@@ -200,11 +151,10 @@ jobs:
           cd ${HOME}/project/
-          DOCKER_IMAGE=pytorch/conda-cuda
           export JUST_TESTRUN=1
          VARS_TO_PASS="-e PYTHON_VERSION -e BUILD_VERSION -e PYTORCH_VERSION -e CU_VERSION -e JUST_TESTRUN"
-          docker run --gpus all --ipc=host -v $(pwd):/remote -w /remote ${VARS_TO_PASS} ${DOCKER_IMAGE} ./packaging/build_conda.sh
+          docker run --gpus all --ipc=host -v $(pwd):/remote -w /remote ${VARS_TO_PASS} ${TESTRUN_DOCKER_IMAGE} ./packaging/build_conda.sh

 binary_macos_wheel:
   <<: *binary_common
@@ -228,50 +178,27 @@ workflows:
   version: 2
   build_and_test:
     jobs:
-      - main:
-          context: DOCKERHUB_TOKEN
+      # - main:
+      #     context: DOCKERHUB_TOKEN
       {{workflows()}}
-      - binary_linux_conda_cuda:
-          name: testrun_conda_cuda_py36_cu101_pyt14
-          context: DOCKERHUB_TOKEN
-          python_version: "3.6"
-          pytorch_version: "1.4"
-          cu_version: "cu101"
       - binary_linux_conda_cuda:
          name: testrun_conda_cuda_py37_cu102_pyt190
          context: DOCKERHUB_TOKEN
          python_version: "3.7"
          pytorch_version: '1.9.0'
          cu_version: "cu102"
-      - binary_linux_conda_cuda:
-          name: testrun_conda_cuda_py37_cu110_pyt170
-          context: DOCKERHUB_TOKEN
-          python_version: "3.7"
-          pytorch_version: '1.7.0'
-          cu_version: "cu110"
-      - binary_linux_conda_cuda:
-          name: testrun_conda_cuda_py39_cu111_pyt181
-          context: DOCKERHUB_TOKEN
-          python_version: "3.9"
-          pytorch_version: '1.8.1'
-          cu_version: "cu111"
-      - binary_macos_wheel:
-          cu_version: cpu
-          name: macos_wheel_py36_cpu
-          python_version: '3.6'
-          pytorch_version: '1.9.0'
       - binary_macos_wheel:
          cu_version: cpu
          name: macos_wheel_py37_cpu
          python_version: '3.7'
-         pytorch_version: '1.9.0'
+         pytorch_version: '1.12.0'
       - binary_macos_wheel:
          cu_version: cpu
          name: macos_wheel_py38_cpu
          python_version: '3.8'
-         pytorch_version: '1.9.0'
+         pytorch_version: '1.12.0'
       - binary_macos_wheel:
          cu_version: cpu
          name: macos_wheel_py39_cpu
          python_version: '3.9'
-         pytorch_version: '1.9.0'
+         pytorch_version: '1.12.0'
```

View File

```
@@ -18,20 +18,13 @@ setupcuda: &setupcuda
     working_directory: ~/
     command: |
       # download and install nvidia drivers, cuda, etc
-      wget --no-verbose --no-clobber -P ~/nvidia-downloads https://developer.download.nvidia.com/compute/cuda/11.2.2/local_installers/cuda_11.2.2_460.32.03_linux.run
-      sudo sh ~/nvidia-downloads/cuda_11.2.2_460.32.03_linux.run --silent
+      wget --no-verbose --no-clobber -P ~/nvidia-downloads https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda_11.3.1_465.19.01_linux.run
+      sudo sh ~/nvidia-downloads/cuda_11.3.1_465.19.01_linux.run --silent
       echo "Done installing CUDA."
       pyenv versions
       nvidia-smi
       pyenv global 3.9.1
-
-gpu: &gpu
-  environment:
-    CUDA_VERSION: "10.2"
-  machine:
-    image: default
-  resource_class: gpu.medium # tesla m60

 binary_common: &binary_common
   parameters:
     # Edit these defaults to do a release`
@@ -54,42 +47,41 @@ binary_common: &binary_common
       description: "Wheel only: what docker image to use"
       type: string
       default: "pytorch/manylinux-cuda101"
+    conda_docker_image:
+      description: "what docker image to use for docker"
+      type: string
+      default: "pytorch/conda-cuda"
   environment:
     PYTHON_VERSION: << parameters.python_version >>
     BUILD_VERSION: << parameters.build_version >>
     PYTORCH_VERSION: << parameters.pytorch_version >>
     CU_VERSION: << parameters.cu_version >>
+    TESTRUN_DOCKER_IMAGE: << parameters.conda_docker_image >>

 jobs:
   main:
-    <<: *gpu
+    environment:
+      CUDA_VERSION: "11.3"
+    resource_class: gpu.nvidia.small.multi
     machine:
       image: ubuntu-2004:202101-01
     steps:
     - checkout
     - <<: *setupcuda
     - run: pip3 install --progress-bar off imageio wheel matplotlib 'pillow<7'
-    - run: pip3 install --progress-bar off torch torchvision
+    - run: pip3 install --progress-bar off torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
     # - run: conda create -p ~/conda_env python=3.7 numpy
     # - run: conda activate ~/conda_env
     # - run: conda install -c pytorch pytorch torchvision
     - run: pip3 install --progress-bar off 'git+https://github.com/facebookresearch/fvcore'
     - run: pip3 install --progress-bar off 'git+https://github.com/facebookresearch/iopath'
-    - run:
-        name: get cub
-        command: |
-          cd ..
-          wget --no-verbose https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
-          tar xzf 1.10.0.tar.gz
-          # This expands to a directory called cub-1.10.0
     - run:
         name: build
         command: |
-          export LD_LIBRARY_PATH=$LD_LIBARY_PATH:/usr/local/cuda-11.2/lib64
-          export CUB_HOME=$(realpath ../cub-1.10.0)
+          export LD_LIBRARY_PATH=$LD_LIBARY_PATH:/usr/local/cuda-11.3/lib64
           python3 setup.py build_ext --inplace
-    - run: LD_LIBRARY_PATH=$LD_LIBARY_PATH:/usr/local/cuda-11.2/lib64 python -m unittest discover -v -s tests
+    - run: LD_LIBRARY_PATH=$LD_LIBARY_PATH:/usr/local/cuda-11.3/lib64 python -m unittest discover -v -s tests -t .
     - run: python3 setup.py bdist_wheel

 binary_linux_wheel:
@@ -113,7 +105,7 @@ jobs:
   binary_linux_conda:
     <<: *binary_common
     docker:
-      - image: "pytorch/conda-cuda"
+      - image: "<< parameters.conda_docker_image >>"
        auth:
          username: $DOCKERHUB_USERNAME
          password: $DOCKERHUB_TOKEN
@@ -136,62 +128,21 @@ jobs:
   binary_linux_conda_cuda:
     <<: *binary_common
     machine:
-      image: ubuntu-1604:201903-01
-    resource_class: gpu.medium
+      image: ubuntu-1604-cuda-10.2:202012-01
+    resource_class: gpu.nvidia.small.multi
     steps:
     - checkout
-    - run:
-        name: Setup environment
-        command: |
-          set -e
-          curl -L https://packagecloud.io/circleci/trusty/gpgkey | sudo apt-key add -
-          curl -L https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
-          sudo apt-get update
-          sudo apt-get install \
-            apt-transport-https \
-            ca-certificates \
-            curl \
-            gnupg-agent \
-            software-properties-common
-          curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
-          sudo add-apt-repository \
-            "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
-            $(lsb_release -cs) \
-            stable"
-          sudo apt-get update
-          export DOCKER_VERSION="5:19.03.2~3-0~ubuntu-xenial"
-          sudo apt-get install docker-ce=${DOCKER_VERSION} docker-ce-cli=${DOCKER_VERSION} containerd.io=1.2.6-3
-          # Add the package repositories
-          distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
-          curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
-          curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
-          export NVIDIA_CONTAINER_VERSION="1.0.3-1"
-          sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit=${NVIDIA_CONTAINER_VERSION}
-          sudo systemctl restart docker
-          DRIVER_FN="NVIDIA-Linux-x86_64-460.84.run"
-          wget "https://us.download.nvidia.com/XFree86/Linux-x86_64/460.84/$DRIVER_FN"
-          sudo /bin/bash "$DRIVER_FN" -s --no-drm || (sudo cat /var/log/nvidia-installer.log && false)
-          nvidia-smi
     - run:
         name: Pull docker image
         command: |
+          nvidia-smi
           set -e
           { docker login -u="$DOCKERHUB_USERNAME" -p="$DOCKERHUB_TOKEN" ; } 2> /dev/null
-          DOCKER_IMAGE=pytorch/conda-cuda
-          echo Pulling docker image $DOCKER_IMAGE
-          docker pull $DOCKER_IMAGE
+          echo Pulling docker image $TESTRUN_DOCKER_IMAGE
+          docker pull $TESTRUN_DOCKER_IMAGE
     - run:
         name: Build and run tests
         no_output_timeout: 20m
@@ -200,11 +151,10 @@ jobs:
          cd ${HOME}/project/
-          DOCKER_IMAGE=pytorch/conda-cuda
          export JUST_TESTRUN=1
          VARS_TO_PASS="-e PYTHON_VERSION -e BUILD_VERSION -e PYTORCH_VERSION -e CU_VERSION -e JUST_TESTRUN"
-          docker run --gpus all --ipc=host -v $(pwd):/remote -w /remote ${VARS_TO_PASS} ${DOCKER_IMAGE} ./packaging/build_conda.sh
+          docker run --gpus all --ipc=host -v $(pwd):/remote -w /remote ${VARS_TO_PASS} ${TESTRUN_DOCKER_IMAGE} ./packaging/build_conda.sh

 binary_macos_wheel:
   <<: *binary_common
@@ -228,260 +178,8 @@ workflows:
   version: 2
   build_and_test:
     jobs:
-      - main:
-          context: DOCKERHUB_TOKEN
+      # - main:
+      #     context: DOCKERHUB_TOKEN
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu92
-          name: linux_conda_py36_cu92_pyt14
-          python_version: '3.6'
-          pytorch_version: '1.4'
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu101
-          name: linux_conda_py36_cu101_pyt14
-          python_version: '3.6'
-          pytorch_version: '1.4'
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu92
-          name: linux_conda_py36_cu92_pyt150
-          python_version: '3.6'
-          pytorch_version: 1.5.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu101
-          name: linux_conda_py36_cu101_pyt150
-          python_version: '3.6'
-          pytorch_version: 1.5.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu102
-          name: linux_conda_py36_cu102_pyt150
-          python_version: '3.6'
-          pytorch_version: 1.5.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu92
-          name: linux_conda_py36_cu92_pyt151
-          python_version: '3.6'
-          pytorch_version: 1.5.1
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu101
-          name: linux_conda_py36_cu101_pyt151
-          python_version: '3.6'
-          pytorch_version: 1.5.1
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu102
-          name: linux_conda_py36_cu102_pyt151
-          python_version: '3.6'
-          pytorch_version: 1.5.1
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu92
-          name: linux_conda_py36_cu92_pyt160
-          python_version: '3.6'
-          pytorch_version: 1.6.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu101
-          name: linux_conda_py36_cu101_pyt160
-          python_version: '3.6'
-          pytorch_version: 1.6.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu102
-          name: linux_conda_py36_cu102_pyt160
-          python_version: '3.6'
-          pytorch_version: 1.6.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu101
-          name: linux_conda_py36_cu101_pyt170
-          python_version: '3.6'
-          pytorch_version: 1.7.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu102
-          name: linux_conda_py36_cu102_pyt170
-          python_version: '3.6'
-          pytorch_version: 1.7.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu110
-          name: linux_conda_py36_cu110_pyt170
-          python_version: '3.6'
-          pytorch_version: 1.7.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu101
-          name: linux_conda_py36_cu101_pyt171
-          python_version: '3.6'
-          pytorch_version: 1.7.1
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu102
-          name: linux_conda_py36_cu102_pyt171
-          python_version: '3.6'
-          pytorch_version: 1.7.1
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu110
-          name: linux_conda_py36_cu110_pyt171
-          python_version: '3.6'
-          pytorch_version: 1.7.1
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu101
-          name: linux_conda_py36_cu101_pyt180
-          python_version: '3.6'
-          pytorch_version: 1.8.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu102
-          name: linux_conda_py36_cu102_pyt180
-          python_version: '3.6'
-          pytorch_version: 1.8.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu111
-          name: linux_conda_py36_cu111_pyt180
-          python_version: '3.6'
-          pytorch_version: 1.8.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu101
-          name: linux_conda_py36_cu101_pyt181
-          python_version: '3.6'
-          pytorch_version: 1.8.1
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu102
-          name: linux_conda_py36_cu102_pyt181
-          python_version: '3.6'
-          pytorch_version: 1.8.1
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu111
-          name: linux_conda_py36_cu111_pyt181
-          python_version: '3.6'
-          pytorch_version: 1.8.1
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu102
-          name: linux_conda_py36_cu102_pyt190
-          python_version: '3.6'
-          pytorch_version: 1.9.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu111
-          name: linux_conda_py36_cu111_pyt190
-          python_version: '3.6'
-          pytorch_version: 1.9.0
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu92
-          name: linux_conda_py37_cu92_pyt14
-          python_version: '3.7'
-          pytorch_version: '1.4'
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu101
-          name: linux_conda_py37_cu101_pyt14
-          python_version: '3.7'
-          pytorch_version: '1.4'
-      - binary_linux_conda:
-          context: DOCKERHUB_TOKEN
-          cu_version: cu92
```
name: linux_conda_py37_cu92_pyt150
python_version: '3.7'
pytorch_version: 1.5.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu101
name: linux_conda_py37_cu101_pyt150
python_version: '3.7'
pytorch_version: 1.5.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py37_cu102_pyt150
python_version: '3.7'
pytorch_version: 1.5.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu92
name: linux_conda_py37_cu92_pyt151
python_version: '3.7'
pytorch_version: 1.5.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu101
name: linux_conda_py37_cu101_pyt151
python_version: '3.7'
pytorch_version: 1.5.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py37_cu102_pyt151
python_version: '3.7'
pytorch_version: 1.5.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu92
name: linux_conda_py37_cu92_pyt160
python_version: '3.7'
pytorch_version: 1.6.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu101
name: linux_conda_py37_cu101_pyt160
python_version: '3.7'
pytorch_version: 1.6.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py37_cu102_pyt160
python_version: '3.7'
pytorch_version: 1.6.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu101
name: linux_conda_py37_cu101_pyt170
python_version: '3.7'
pytorch_version: 1.7.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py37_cu102_pyt170
python_version: '3.7'
pytorch_version: 1.7.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu110
name: linux_conda_py37_cu110_pyt170
python_version: '3.7'
pytorch_version: 1.7.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu101
name: linux_conda_py37_cu101_pyt171
python_version: '3.7'
pytorch_version: 1.7.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py37_cu102_pyt171
python_version: '3.7'
pytorch_version: 1.7.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu110
name: linux_conda_py37_cu110_pyt171
python_version: '3.7'
pytorch_version: 1.7.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu101
@@ -532,106 +230,119 @@ workflows:
pytorch_version: 1.9.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
-cu_version: cu92
-name: linux_conda_py38_cu92_pyt14
-python_version: '3.8'
-pytorch_version: '1.4'
+cu_version: cu102
+name: linux_conda_py37_cu102_pyt191
+python_version: '3.7'
+pytorch_version: 1.9.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
-cu_version: cu101
-name: linux_conda_py38_cu101_pyt14
-python_version: '3.8'
-pytorch_version: '1.4'
+cu_version: cu111
+name: linux_conda_py37_cu111_pyt191
+python_version: '3.7'
+pytorch_version: 1.9.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu92
name: linux_conda_py38_cu92_pyt150
python_version: '3.8'
pytorch_version: 1.5.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu101
name: linux_conda_py38_cu101_pyt150
python_version: '3.8'
pytorch_version: 1.5.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
-name: linux_conda_py38_cu102_pyt150
-python_version: '3.8'
-pytorch_version: 1.5.0
+name: linux_conda_py37_cu102_pyt1100
+python_version: '3.7'
+pytorch_version: 1.10.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
-cu_version: cu92
-name: linux_conda_py38_cu92_pyt151
-python_version: '3.8'
-pytorch_version: 1.5.1
+cu_version: cu111
+name: linux_conda_py37_cu111_pyt1100
+python_version: '3.7'
+pytorch_version: 1.10.0
- binary_linux_conda:
+conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
-cu_version: cu101
-name: linux_conda_py38_cu101_pyt151
-python_version: '3.8'
-pytorch_version: 1.5.1
+cu_version: cu113
+name: linux_conda_py37_cu113_pyt1100
+python_version: '3.7'
+pytorch_version: 1.10.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
-name: linux_conda_py38_cu102_pyt151
-python_version: '3.8'
-pytorch_version: 1.5.1
+name: linux_conda_py37_cu102_pyt1101
+python_version: '3.7'
+pytorch_version: 1.10.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
-cu_version: cu92
-name: linux_conda_py38_cu92_pyt160
-python_version: '3.8'
-pytorch_version: 1.6.0
+cu_version: cu111
+name: linux_conda_py37_cu111_pyt1101
+python_version: '3.7'
+pytorch_version: 1.10.1
- binary_linux_conda:
+conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
-cu_version: cu101
-name: linux_conda_py38_cu101_pyt160
-python_version: '3.8'
-pytorch_version: 1.6.0
+cu_version: cu113
+name: linux_conda_py37_cu113_pyt1101
+python_version: '3.7'
+pytorch_version: 1.10.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
-name: linux_conda_py38_cu102_pyt160
-python_version: '3.8'
-pytorch_version: 1.6.0
+name: linux_conda_py37_cu102_pyt1102
+python_version: '3.7'
+pytorch_version: 1.10.2
- binary_linux_conda:
context: DOCKERHUB_TOKEN
-cu_version: cu101
-name: linux_conda_py38_cu101_pyt170
-python_version: '3.8'
-pytorch_version: 1.7.0
+cu_version: cu111
+name: linux_conda_py37_cu111_pyt1102
+python_version: '3.7'
+pytorch_version: 1.10.2
+- binary_linux_conda:
+conda_docker_image: pytorch/conda-builder:cuda113
+context: DOCKERHUB_TOKEN
+cu_version: cu113
+name: linux_conda_py37_cu113_pyt1102
+python_version: '3.7'
+pytorch_version: 1.10.2
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
-name: linux_conda_py38_cu102_pyt170
-python_version: '3.8'
-pytorch_version: 1.7.0
+name: linux_conda_py37_cu102_pyt1110
+python_version: '3.7'
+pytorch_version: 1.11.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
-cu_version: cu110
-name: linux_conda_py38_cu110_pyt170
-python_version: '3.8'
-pytorch_version: 1.7.0
+cu_version: cu111
+name: linux_conda_py37_cu111_pyt1110
+python_version: '3.7'
+pytorch_version: 1.11.0
- binary_linux_conda:
+conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
-cu_version: cu101
-name: linux_conda_py38_cu101_pyt171
-python_version: '3.8'
-pytorch_version: 1.7.1
+cu_version: cu113
+name: linux_conda_py37_cu113_pyt1110
+python_version: '3.7'
+pytorch_version: 1.11.0
+- binary_linux_conda:
+conda_docker_image: pytorch/conda-builder:cuda115
+context: DOCKERHUB_TOKEN
+cu_version: cu115
+name: linux_conda_py37_cu115_pyt1110
+python_version: '3.7'
+pytorch_version: 1.11.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
-name: linux_conda_py38_cu102_pyt171
-python_version: '3.8'
-pytorch_version: 1.7.1
+name: linux_conda_py37_cu102_pyt1120
+python_version: '3.7'
+pytorch_version: 1.12.0
- binary_linux_conda:
+conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
-cu_version: cu110
-name: linux_conda_py38_cu110_pyt171
-python_version: '3.8'
-pytorch_version: 1.7.1
+cu_version: cu113
+name: linux_conda_py37_cu113_pyt1120
+python_version: '3.7'
+pytorch_version: 1.12.0
+- binary_linux_conda:
+conda_docker_image: pytorch/conda-builder:cuda116
+context: DOCKERHUB_TOKEN
+cu_version: cu116
+name: linux_conda_py37_cu116_pyt1120
+python_version: '3.7'
+pytorch_version: 1.12.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu101
@@ -682,22 +393,119 @@ workflows:
pytorch_version: 1.9.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
-cu_version: cu101
-name: linux_conda_py39_cu101_pyt171
-python_version: '3.9'
-pytorch_version: 1.7.1
+cu_version: cu102
+name: linux_conda_py38_cu102_pyt191
+python_version: '3.8'
+pytorch_version: 1.9.1
+- binary_linux_conda:
+context: DOCKERHUB_TOKEN
+cu_version: cu111
+name: linux_conda_py38_cu111_pyt191
+python_version: '3.8'
+pytorch_version: 1.9.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
-name: linux_conda_py39_cu102_pyt171
-python_version: '3.9'
-pytorch_version: 1.7.1
+name: linux_conda_py38_cu102_pyt1100
+python_version: '3.8'
+pytorch_version: 1.10.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
-cu_version: cu110
-name: linux_conda_py39_cu110_pyt171
-python_version: '3.9'
-pytorch_version: 1.7.1
+cu_version: cu111
+name: linux_conda_py38_cu111_pyt1100
+python_version: '3.8'
+pytorch_version: 1.10.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py38_cu113_pyt1100
python_version: '3.8'
pytorch_version: 1.10.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py38_cu102_pyt1101
python_version: '3.8'
pytorch_version: 1.10.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu111
name: linux_conda_py38_cu111_pyt1101
python_version: '3.8'
pytorch_version: 1.10.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py38_cu113_pyt1101
python_version: '3.8'
pytorch_version: 1.10.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py38_cu102_pyt1102
python_version: '3.8'
pytorch_version: 1.10.2
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu111
name: linux_conda_py38_cu111_pyt1102
python_version: '3.8'
pytorch_version: 1.10.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py38_cu113_pyt1102
python_version: '3.8'
pytorch_version: 1.10.2
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py38_cu102_pyt1110
python_version: '3.8'
pytorch_version: 1.11.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu111
name: linux_conda_py38_cu111_pyt1110
python_version: '3.8'
pytorch_version: 1.11.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py38_cu113_pyt1110
python_version: '3.8'
pytorch_version: 1.11.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda115
context: DOCKERHUB_TOKEN
cu_version: cu115
name: linux_conda_py38_cu115_pyt1110
python_version: '3.8'
pytorch_version: 1.11.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py38_cu102_pyt1120
python_version: '3.8'
pytorch_version: 1.12.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py38_cu113_pyt1120
python_version: '3.8'
pytorch_version: 1.12.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py38_cu116_pyt1120
python_version: '3.8'
pytorch_version: 1.12.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu101
@@ -746,47 +554,185 @@ workflows:
name: linux_conda_py39_cu111_pyt190
python_version: '3.9'
pytorch_version: 1.9.0
-- binary_linux_conda_cuda:
-name: testrun_conda_cuda_py36_cu101_pyt14
+- binary_linux_conda:
context: DOCKERHUB_TOKEN
-python_version: "3.6"
-pytorch_version: "1.4"
-cu_version: "cu101"
+cu_version: cu102
+name: linux_conda_py39_cu102_pyt191
+python_version: '3.9'
+pytorch_version: 1.9.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu111
name: linux_conda_py39_cu111_pyt191
python_version: '3.9'
pytorch_version: 1.9.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py39_cu102_pyt1100
python_version: '3.9'
pytorch_version: 1.10.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu111
name: linux_conda_py39_cu111_pyt1100
python_version: '3.9'
pytorch_version: 1.10.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py39_cu113_pyt1100
python_version: '3.9'
pytorch_version: 1.10.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py39_cu102_pyt1101
python_version: '3.9'
pytorch_version: 1.10.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu111
name: linux_conda_py39_cu111_pyt1101
python_version: '3.9'
pytorch_version: 1.10.1
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py39_cu113_pyt1101
python_version: '3.9'
pytorch_version: 1.10.1
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py39_cu102_pyt1102
python_version: '3.9'
pytorch_version: 1.10.2
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu111
name: linux_conda_py39_cu111_pyt1102
python_version: '3.9'
pytorch_version: 1.10.2
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py39_cu113_pyt1102
python_version: '3.9'
pytorch_version: 1.10.2
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py39_cu102_pyt1110
python_version: '3.9'
pytorch_version: 1.11.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu111
name: linux_conda_py39_cu111_pyt1110
python_version: '3.9'
pytorch_version: 1.11.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py39_cu113_pyt1110
python_version: '3.9'
pytorch_version: 1.11.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda115
context: DOCKERHUB_TOKEN
cu_version: cu115
name: linux_conda_py39_cu115_pyt1110
python_version: '3.9'
pytorch_version: 1.11.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py39_cu102_pyt1120
python_version: '3.9'
pytorch_version: 1.12.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py39_cu113_pyt1120
python_version: '3.9'
pytorch_version: 1.12.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py39_cu116_pyt1120
python_version: '3.9'
pytorch_version: 1.12.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py310_cu102_pyt1110
python_version: '3.10'
pytorch_version: 1.11.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu111
name: linux_conda_py310_cu111_pyt1110
python_version: '3.10'
pytorch_version: 1.11.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py310_cu113_pyt1110
python_version: '3.10'
pytorch_version: 1.11.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda115
context: DOCKERHUB_TOKEN
cu_version: cu115
name: linux_conda_py310_cu115_pyt1110
python_version: '3.10'
pytorch_version: 1.11.0
- binary_linux_conda:
context: DOCKERHUB_TOKEN
cu_version: cu102
name: linux_conda_py310_cu102_pyt1120
python_version: '3.10'
pytorch_version: 1.12.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda113
context: DOCKERHUB_TOKEN
cu_version: cu113
name: linux_conda_py310_cu113_pyt1120
python_version: '3.10'
pytorch_version: 1.12.0
- binary_linux_conda:
conda_docker_image: pytorch/conda-builder:cuda116
context: DOCKERHUB_TOKEN
cu_version: cu116
name: linux_conda_py310_cu116_pyt1120
python_version: '3.10'
pytorch_version: 1.12.0
- binary_linux_conda_cuda:
name: testrun_conda_cuda_py37_cu102_pyt190
context: DOCKERHUB_TOKEN
python_version: "3.7"
pytorch_version: '1.9.0'
cu_version: "cu102"
- binary_linux_conda_cuda:
name: testrun_conda_cuda_py37_cu110_pyt170
context: DOCKERHUB_TOKEN
python_version: "3.7"
pytorch_version: '1.7.0'
cu_version: "cu110"
- binary_linux_conda_cuda:
name: testrun_conda_cuda_py39_cu111_pyt181
context: DOCKERHUB_TOKEN
python_version: "3.9"
pytorch_version: '1.8.1'
cu_version: "cu111"
- binary_macos_wheel:
cu_version: cpu
name: macos_wheel_py36_cpu
python_version: '3.6'
pytorch_version: '1.9.0'
- binary_macos_wheel:
cu_version: cpu
name: macos_wheel_py37_cpu
python_version: '3.7'
-pytorch_version: '1.9.0'
+pytorch_version: '1.12.0'
- binary_macos_wheel:
cu_version: cpu
name: macos_wheel_py38_cpu
python_version: '3.8'
-pytorch_version: '1.9.0'
+pytorch_version: '1.12.0'
- binary_macos_wheel:
cu_version: cpu
name: macos_wheel_py39_cpu
python_version: '3.9'
-pytorch_version: '1.9.0'
+pytorch_version: '1.12.0'


@@ -1,5 +1,5 @@
#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
+# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
@@ -13,35 +13,58 @@ import os.path
import jinja2
import yaml
+from packaging import version
# The CUDA versions which have pytorch conda packages available for linux for each
# version of pytorch.
# Pytorch 1.4 also supports cuda 10.0 but we no longer build for cuda 10.0 at all.
CONDA_CUDA_VERSIONS = {
-    "1.4": ["cu92", "cu101"],
-    "1.5.0": ["cu92", "cu101", "cu102"],
-    "1.5.1": ["cu92", "cu101", "cu102"],
-    "1.6.0": ["cu92", "cu101", "cu102"],
-    "1.7.0": ["cu101", "cu102", "cu110"],
-    "1.7.1": ["cu101", "cu102", "cu110"],
    "1.8.0": ["cu101", "cu102", "cu111"],
    "1.8.1": ["cu101", "cu102", "cu111"],
    "1.9.0": ["cu102", "cu111"],
+    "1.9.1": ["cu102", "cu111"],
+    "1.10.0": ["cu102", "cu111", "cu113"],
+    "1.10.1": ["cu102", "cu111", "cu113"],
+    "1.10.2": ["cu102", "cu111", "cu113"],
+    "1.11.0": ["cu102", "cu111", "cu113", "cu115"],
+    "1.12.0": ["cu102", "cu113", "cu116"],
}
+def conda_docker_image_for_cuda(cuda_version):
+    if cuda_version in ("cu101", "cu102", "cu111"):
+        return None
+    if cuda_version == "cu113":
+        return "pytorch/conda-builder:cuda113"
+    if cuda_version == "cu115":
+        return "pytorch/conda-builder:cuda115"
+    if cuda_version == "cu116":
+        return "pytorch/conda-builder:cuda116"
+    raise ValueError("Unknown cuda version")
def pytorch_versions_for_python(python_version):
-    if python_version in ["3.6", "3.7", "3.8"]:
+    if python_version in ["3.7", "3.8"]:
        return list(CONDA_CUDA_VERSIONS)
-    pytorch_without_py39 = ["1.4", "1.5.0", "1.5.1", "1.6.0", "1.7.0"]
-    return [i for i in CONDA_CUDA_VERSIONS if i not in pytorch_without_py39]
+    if python_version == "3.9":
+        return [
+            i
+            for i in CONDA_CUDA_VERSIONS
+            if version.Version(i) > version.Version("1.7.0")
+        ]
+    if python_version == "3.10":
+        return [
+            i
+            for i in CONDA_CUDA_VERSIONS
+            if version.Version(i) >= version.Version("1.11.0")
+        ]
def workflows(prefix="", filter_branch=None, upload=False, indentation=6):
    w = []
    for btype in ["conda"]:
-        for python_version in ["3.6", "3.7", "3.8", "3.9"]:
+        for python_version in ["3.7", "3.8", "3.9", "3.10"]:
            for pytorch_version in pytorch_versions_for_python(python_version):
                for cu_version in CONDA_CUDA_VERSIONS[pytorch_version]:
                    w += workflow_pair(
@@ -115,6 +138,10 @@ def generate_base_workflow(
        "context": "DOCKERHUB_TOKEN",
    }
+    conda_docker_image = conda_docker_image_for_cuda(cu_version)
+    if conda_docker_image is not None:
+        d["conda_docker_image"] = conda_docker_image
    if filter_branch is not None:
        d["filters"] = {"branches": {"only": filter_branch}}
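The version-gating logic in the generator diff above can be checked in isolation. The following is a minimal sketch, not the repo's actual script: it mirrors `pytorch_versions_for_python` but substitutes stdlib tuple comparison for the `packaging.version` import used in the real file, so it runs with no third-party dependencies.

```python
# Sketch of the per-Python-version PyTorch filtering shown in the diff above.
# Tuple comparison stands in for packaging.version (an assumption made so the
# sketch is dependency-free).

CONDA_CUDA_VERSIONS = {
    "1.8.0": ["cu101", "cu102", "cu111"],
    "1.8.1": ["cu101", "cu102", "cu111"],
    "1.9.0": ["cu102", "cu111"],
    "1.9.1": ["cu102", "cu111"],
    "1.10.0": ["cu102", "cu111", "cu113"],
    "1.10.1": ["cu102", "cu111", "cu113"],
    "1.10.2": ["cu102", "cu111", "cu113"],
    "1.11.0": ["cu102", "cu111", "cu113", "cu115"],
    "1.12.0": ["cu102", "cu113", "cu116"],
}

def _v(s):
    # "1.10.2" -> (1, 10, 2), so versions compare numerically, not lexically.
    return tuple(int(p) for p in s.split("."))

def pytorch_versions_for_python(python_version):
    if python_version in ["3.7", "3.8"]:
        return list(CONDA_CUDA_VERSIONS)
    if python_version == "3.9":
        return [i for i in CONDA_CUDA_VERSIONS if _v(i) > _v("1.7.0")]
    if python_version == "3.10":
        return [i for i in CONDA_CUDA_VERSIONS if _v(i) >= _v("1.11.0")]

print(pytorch_versions_for_python("3.10"))  # ['1.11.0', '1.12.0']
```

Note why tuple (or `packaging.version`) comparison matters here: plain string comparison would order "1.9.0" after "1.10.0".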


@@ -46,7 +46,7 @@ outlined on that page and do not file a public issue.
## Coding Style
We follow these [python](http://google.github.io/styleguide/pyguide.html) and [C++](https://google.github.io/styleguide/cppguide.html) style guides.
-For the linter to work, you will need to install `black`, `flake`, `isort` and `clang-format`, and
+For the linter to work, you will need to install `black`, `flake`, `usort` and `clang-format`, and
they need to be fairly up to date.
## License


@@ -9,7 +9,7 @@ The core library is written in PyTorch. Several components have underlying imple
- Linux or macOS or Windows
- Python 3.6, 3.7, 3.8 or 3.9
-- PyTorch 1.4, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.8.0, 1.8.1 or 1.9.0.
+- PyTorch 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0 or 1.12.0.
- torchvision that matches the PyTorch installation. You can install them together as explained at pytorch.org to make sure of this.
- gcc & g++ ≥ 4.9
- [fvcore](https://github.com/facebookresearch/fvcore)
@@ -19,9 +19,9 @@ The core library is written in PyTorch. Several components have underlying imple
The runtime dependencies can be installed by running:
```
-conda create -n pytorch3d python=3.8
+conda create -n pytorch3d python=3.9
conda activate pytorch3d
-conda install -c pytorch pytorch=1.7.1 torchvision cudatoolkit=10.2
+conda install -c pytorch pytorch=1.9.1 torchvision cudatoolkit=10.2
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
```
@@ -43,7 +43,7 @@ export CUB_HOME=$PWD/cub-1.10.0
For developing on top of PyTorch3D or contributing, you will need to run the linter and tests. If you want to run any of the notebook tutorials as `docs/tutorials` or the examples in `docs/examples` you will also need matplotlib and OpenCV.
- scikit-image
- black
-- isort
+- usort
- flake8
- matplotlib
- tdqm
@@ -59,7 +59,7 @@ conda install jupyter
pip install scikit-image matplotlib imageio plotly opencv-python
# Tests/Linting
-pip install black 'isort<5' flake8 flake8-bugbear flake8-comprehensions
+pip install black usort flake8 flake8-bugbear flake8-comprehensions
```
## Installing prebuilt binaries for PyTorch3D
@@ -78,30 +78,31 @@ Or, to install a nightly (non-official, alpha) build:
conda install pytorch3d -c pytorch3d-nightly
```
### 2. Install from PyPI, on Mac only.
-This works with pytorch 1.9.0 only. The build is CPU only.
+This works with pytorch 1.12.0 only. The build is CPU only.
```
pip install pytorch3d
```
### 3. Install wheels for Linux
-We have prebuilt wheels with CUDA for Linux for PyTorch 1.9.0, for each of the CUDA versions that they support,
-for Python 3.7, 3.8 and 3.9.
+We have prebuilt wheels with CUDA for Linux for PyTorch 1.11.0, for each of the supported CUDA versions,
+for Python 3.7, 3.8 and 3.9. This is for ease of use on Google Colab.
These are installed in a special way.
-For example, to install for Python 3.8, PyTorch 1.9.0 and CUDA 10.2
+For example, to install for Python 3.8, PyTorch 1.11.0 and CUDA 11.3
```
-pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu102_pyt190/download.html
+pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1110/download.html
```
In general, from inside IPython, or in Google Colab or a jupyter notebook, you can install with
```
import sys
import torch
+pyt_version_str=torch.__version__.split("+")[0].replace(".", "")
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
-f"_pyt{torch.__version__[0:5:2]}"
+f"_pyt{pyt_version_str}"
])
-!pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
+!pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
```
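The wheel-name string manipulation from the install snippet above can be verified without torch installed. This is a hedged sketch: the three input values are hard-coded assumptions standing in for `torch.__version__`, `torch.version.cuda`, and `sys.version_info.minor`.

```python
# Reproduce the version_str logic from the install snippet above, with the
# torch-derived values hard-coded (assumed) so the sketch runs without torch.
torch_version = "1.11.0+cu113"   # stand-in for torch.__version__
cuda_version = "11.3"            # stand-in for torch.version.cuda
python_minor = 8                 # stand-in for sys.version_info.minor

# Strip any local-build suffix ("+cu113"), then drop the dots: "1.11.0" -> "1110".
pyt_version_str = torch_version.split("+")[0].replace(".", "")
version_str = "".join([
    f"py3{python_minor}_cu",
    cuda_version.replace(".", ""),
    f"_pyt{pyt_version_str}",
])
print(version_str)  # py38_cu113_pyt1110
```

The result matches the wheel directory name used in the explicit `pip install` example (`py38_cu113_pyt1110`), which is why the split-on-`+` step matters: without it, a local build string like `1.11.0+cu113` would corrupt the URL.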
## Building / installing from source. ## Building / installing from source.
@@ -146,10 +147,10 @@ After any necessary patching, you can go to "x64 Native Tools Command Prompt for
cd pytorch3d
python3 setup.py install
```
-After installing, verify whether all unit tests have passed
+After installing, you can run **unit tests**
```
-cd tests
-python3 -m unittest discover -p *.py
+python3 -m unittest discover -v -s tests -t .
```
# FAQ


@@ -2,7 +2,7 @@ BSD License
For PyTorch3D software
-Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
+Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
@@ -14,7 +14,7 @@ are permitted provided that the following conditions are met:
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
-* Neither the name Facebook nor the names of its contributors may be used to
+* Neither the name Meta nor the names of its contributors may be used to
endorse or promote products derived from this software without specific
prior written permission.

LICENSE-3RD-PARTY (new file, 71 lines)

@@ -0,0 +1,71 @@
SRN license ( https://github.com/vsitzmann/scene-representation-networks/ ):
MIT License
Copyright (c) 2019 Vincent Sitzmann
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
IDR license ( github.com/lioryariv/idr ):
MIT License
Copyright (c) 2020 Lior Yariv
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
NeRF https://github.com/bmild/nerf/
Copyright (c) 2020 bmild
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,4 +1,4 @@
<img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/pytorch3dlogo.png" width="900"/>
[![CircleCI](https://circleci.com/gh/facebookresearch/pytorch3d.svg?style=svg)](https://circleci.com/gh/facebookresearch/pytorch3d)
[![Anaconda-Server Badge](https://anaconda.org/pytorch3d/pytorch3d/badges/version.svg)](https://anaconda.org/pytorch3d/pytorch3d)
@@ -35,25 +35,25 @@ PyTorch3D is released under the [BSD License](LICENSE).
Get started with PyTorch3D by trying one of the tutorial notebooks.
|<img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/dolphin_deform.gif" width="310"/>|<img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/bundle_adjust.gif" width="310"/>|
|:-----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------:|
| [Deform a sphere mesh to dolphin](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/deform_source_mesh_to_target_mesh.ipynb)| [Bundle adjustment](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/bundle_adjustment.ipynb) |
| <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/render_textured_mesh.gif" width="310"/> | <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/camera_position_teapot.gif" width="310" height="310"/> |
|:------------------------------------------------------------:|:--------------------------------------------------:|
| [Render textured meshes](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/render_textured_meshes.ipynb)| [Camera position optimization](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/camera_position_optimization_with_differentiable_rendering.ipynb)|
| <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/pointcloud_render.png" width="310"/> | <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/cow_deform.gif" width="310" height="310"/> |
|:------------------------------------------------------------:|:--------------------------------------------------:|
| [Render textured pointclouds](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/render_colored_points.ipynb)| [Fit a mesh with texture](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/fit_textured_mesh.ipynb)|
| <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/densepose_render.png" width="310"/> | <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/shapenet_render.png" width="310" height="310"/> |
|:------------------------------------------------------------:|:--------------------------------------------------:|
| [Render DensePose data](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/render_densepose.ipynb)| [Load & Render ShapeNet data](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/dataloaders_ShapeNetCore_R2N2.ipynb)|
| <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/fit_textured_volume.gif" width="310"/> | <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/fit_nerf.gif" width="310" height="310"/> |
|:------------------------------------------------------------:|:--------------------------------------------------:|
| [Fit Textured Volume](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/fit_textured_volume.ipynb)| [Fit A Simple Neural Radiance Field](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/fit_simple_neural_radiance_field.ipynb)|
@@ -64,9 +64,9 @@ Learn more about the API by reading the PyTorch3D [documentation](https://pytorc
We also have deep dive notes on several API components:
- [Heterogeneous Batching](https://github.com/facebookresearch/pytorch3d/tree/main/docs/notes/batching.md)
- [Mesh IO](https://github.com/facebookresearch/pytorch3d/tree/main/docs/notes/meshes_io.md)
- [Differentiable Rendering](https://github.com/facebookresearch/pytorch3d/tree/main/docs/notes/renderer_getting_started.md)
### Overview Video
@@ -78,6 +78,13 @@ We have created a short (~14 min) video tutorial providing an overview of the Py
We welcome new contributions to PyTorch3D and we will be actively maintaining this library! Please refer to [CONTRIBUTING.md](./.github/CONTRIBUTING.md) for full instructions on how to run the code, tests and linter, and submit your pull requests.
## Development and Compatibility
- `main` branch: actively developed, without any guarantees. Anything can be broken at any time
- REMARK: this includes nightly builds which are built from `main`
- HINT: the commit history can help locate regressions or changes
- backward-compatibility between releases: no guarantee. Best efforts to communicate breaking changes and facilitate migration of code or data (incl. models).
## Contributors
PyTorch3D is written and maintained by the Facebook AI Research Computer Vision Team.
@@ -90,7 +97,7 @@ In alphabetical order:
* Georgia Gkioxari
* Taylor Gordon
* Justin Johnson
* Patrick Labatut
* Christoph Lassner
* Wan-Yen Lo
* David Novotny
@@ -129,7 +136,13 @@ If you are using the pulsar backend for sphere-rendering (the `PulsarPointRender
Please see below for a timeline of the codebase updates in reverse chronological order. We are sharing updates on the releases as well as research projects which are built with PyTorch3D. The changelogs for the releases are available under [`Releases`](https://github.com/facebookresearch/pytorch3d/releases), and the builds can be installed using `conda` as per the instructions in [INSTALL.md](INSTALL.md).
**[Dec 16th 2021]:** PyTorch3D [v0.6.1](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.6.1) released
**[Oct 6th 2021]:** PyTorch3D [v0.6.0](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.6.0) released
**[Aug 5th 2021]:** PyTorch3D [v0.5.0](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.5.0) released
**[Feb 9th 2021]:** PyTorch3D [v0.4.0](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.4.0) released with support for implicit functions, volume rendering and a [reimplementation of NeRF](https://github.com/facebookresearch/pytorch3d/tree/main/projects/nerf).
**[November 2nd 2020]:** PyTorch3D [v0.3.0](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.3.0) released, integrating the pulsar backend.


@@ -1,5 +1,5 @@
#!/bin/bash -e
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
@@ -7,20 +7,17 @@
# Run this script at project root by "./dev/linter.sh" before you commit
{
V=$(black --version|cut '-d ' -f3)
code='import distutils.version; assert "19.3" < distutils.version.LooseVersion("'$V'")'
python -c "${code}" 2> /dev/null
} || {
echo "Linter requires black 19.3b0 or higher!"
exit 1
}
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
DIR=$(dirname "${DIR}")
if [[ -f "${DIR}/TARGETS" ]]
then
pyfmt "${DIR}"
else
# run usort externally only
echo "Running usort..."
usort "${DIR}"
fi
echo "Running black..."
black "${DIR}"
@@ -33,7 +30,7 @@ clangformat=$(command -v clang-format-8 || echo clang-format)
find "${DIR}" -regex ".*\.\(cpp\|c\|cc\|cu\|cuh\|cxx\|h\|hh\|hpp\|hxx\|tcc\|mm\|m\)" -print0 | xargs -0 "${clangformat}" -i
# Run arc and pyre internally only.
if [[ -f "${DIR}/TARGETS" ]]
then
(cd "${DIR}"; command -v arc > /dev/null && arc lint) || true


@@ -1,5 +1,5 @@
#!/usr/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the

dev/test_list.py Normal file

@@ -0,0 +1,64 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

import ast
from pathlib import Path
from typing import List

"""
This module outputs a list of tests for completion.
It has no dependencies.
"""


def get_test_files() -> List[Path]:
    root = Path(__file__).parent.parent
    return list((root / "tests").glob("**/test*.py"))


def tests_from_file(path: Path, base: str) -> List[str]:
    """
    Returns all the tests in the given file, in format
    expected as arguments when running the tests.
    e.g.
        file_stem
        file_stem.TestFunctionality
        file_stem.TestFunctionality.test_f
        file_stem.TestFunctionality.test_g
    """
    with open(path) as f:
        node = ast.parse(f.read())
    out = [base]
    for cls in node.body:
        if not isinstance(cls, ast.ClassDef):
            continue
        if not cls.name.startswith("Test"):
            continue
        class_base = base + "." + cls.name
        out.append(class_base)
        for method in cls.body:
            if not isinstance(method, ast.FunctionDef):
                continue
            if not method.name.startswith("test"):
                continue
            out.append(class_base + "." + method.name)
    return out


def main() -> None:
    files = get_test_files()
    test_root = Path(__file__).parent.parent
    all_tests = []
    for f in files:
        file_base = str(f.relative_to(test_root))[:-3].replace("/", ".")
        all_tests.extend(tests_from_file(f, file_base))
    for test in sorted(all_tests):
        print(test)


if __name__ == "__main__":
    main()


@@ -1,4 +1,8 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
# Minimal makefile for Sphinx documentation


@@ -1,5 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
@@ -82,11 +82,11 @@ for m in ["cv2", "scipy", "numpy", "pytorch3d._C", "np.eye", "np.zeros"]:
# -- Project information -----------------------------------------------------
project = "PyTorch3D"
copyright = "Meta Platforms, Inc"
author = "facebookresearch"
# The short X.Y version
version = ""
# The full version, including alpha/beta/rc tags
release = version
@@ -159,7 +159,7 @@ html_theme_options = {"collapse_navigation": True}
def url_resolver(url):
    if ".html" not in url:
        url = url.replace("../", "")
        return "https://github.com/facebookresearch/pytorch3d/blob/main/" + url
    else:
        if DEPLOY:
            return "http://pytorch3d.readthedocs.io/" + url


@@ -1,5 +1,5 @@
#!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the


@@ -1,5 +1,5 @@
#!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the


@@ -1,5 +1,5 @@
#!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the


@@ -1,5 +1,5 @@
#!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the


@@ -1,5 +1,5 @@
#!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the


@@ -1,5 +1,5 @@
#!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the


@@ -1,5 +1,5 @@
#!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the


@@ -3,7 +3,6 @@ API Documentation
.. toctree::

   common
   structures
   io
   loss
@@ -12,3 +11,5 @@ API Documentation
   transforms
   utils
   datasets
   common
   vis

docs/modules/vis.rst Normal file

@@ -0,0 +1,6 @@
pytorch3d.vis
===========================

.. automodule:: pytorch3d.vis
   :members:
   :undoc-members:

docs/notes/assets/iou3d.gif Normal file (new binary file, 221 KiB)
(second new binary file, 12 KiB; filename not shown)


@@ -26,7 +26,7 @@ The need for different mesh batch modes is inherent to the way PyTorch operators
<img src="assets/meshrcnn.png" alt="meshrcnn" width="700" align="middle" />
[meshes]: https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/structures/meshes.py
[graphconv]: https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/ops/graph_conv.py
[vert_align]: https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/ops/vert_align.py
[meshrcnn]: https://github.com/facebookresearch/meshrcnn


@@ -13,13 +13,14 @@ This is the system the object/scene lives - the world.
* **Camera view coordinate system**
This is the system that has its origin on the image plane and the `Z`-axis perpendicular to the image plane. In PyTorch3D, we assume that `+X` points left, and `+Y` points up and `+Z` points out from the image plane. The transformation from world to view happens after applying a rotation (`R`) and translation (`T`).
* **NDC coordinate system**
This is the normalized coordinate system that confines in a volume the rendered part of the object/scene. Also known as view volume. For square images, under the PyTorch3D convention, `(+1, +1, znear)` is the top left near corner, and `(-1, -1, zfar)` is the bottom right far corner of the volume. For non-square images, the side of the volume in `XY` with the smallest length ranges from `[-1, 1]` while the larger side from `[-s, s]`, where `s` is the aspect ratio and `s > 1` (larger divided by smaller side).
The transformation from view to NDC happens after applying the camera projection matrix (`P`).
* **Screen coordinate system**
This is another representation of the view volume with the `XY` coordinates defined in pixel space instead of a normalized space. (0,0) is the top left corner of the top left pixel and (W,H) is the bottom right corner of the bottom right pixel.
An illustration of the 4 coordinate systems is shown below
![cameras](https://user-images.githubusercontent.com/669761/145090051-67b506d7-6d73-4826-a677-5873b7cb92ba.png)
## Defining Cameras in PyTorch3D
@@ -44,7 +45,7 @@ All cameras inherit from `CamerasBase` which is a base class for all cameras. Py
* `transform_points` which takes a set of input points in world coordinates and projects to NDC coordinates ranging from [-1, -1, znear] to [+1, +1, zfar].
* `get_ndc_camera_transform` which defines the conversion to PyTorch3D's NDC space and is called when interfacing with the PyTorch3D renderer. If the camera is defined in NDC space, then the identity transform is returned. If the camera is defined in screen space, the conversion from screen to NDC is returned. If users define their own camera in screen space, they need to think of the screen to NDC conversion. We provide examples for the `PerspectiveCameras` and `OrthographicCameras`.
* `transform_points_ndc` which takes a set of points in world coordinates and projects them to PyTorch3D's NDC space
* `transform_points_screen` which takes a set of input points in world coordinates and projects them to the screen coordinates ranging from [0, 0, znear] to [W, H, zfar]
@@ -83,8 +84,8 @@ cameras_ndc = PerspectiveCameras(focal_length=fcl_ndc, principal_point=prp_ndc)
# Screen space camera
image_size = ((128, 256),)  # (h, w)
fcl_screen = (76.8,)  # fcl_ndc * min(image_size) / 2
prp_screen = ((115.2, 48), )  # w / 2 - px_ndc * min(image_size) / 2, h / 2 - py_ndc * min(image_size) / 2
cameras_screen = PerspectiveCameras(focal_length=fcl_screen, principal_point=prp_screen, in_ndc=False, image_size=image_size)
```
@@ -92,9 +93,9 @@ The relationship between screen and NDC specifications of a camera's `focal_leng
The transformation of x and y coordinates between screen and NDC is exactly the same as for px and py.
```
fx_ndc = fx_screen * 2.0 / s
fy_ndc = fy_screen * 2.0 / s
px_ndc = - (px_screen - image_width / 2.0) * 2.0 / s
py_ndc = - (py_screen - image_height / 2.0) * 2.0 / s
```
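The numbers in the screen-space camera example can be checked with a few lines of plain Python. This is standalone arithmetic, not the PyTorch3D API, assuming `s` is the length of the smaller image side:

```python
# Recover the NDC-space camera of the example from its screen-space parameters.
image_height, image_width = 128, 256
s = min(image_width, image_height)  # 128

fx_screen = 76.8
px_screen, py_screen = 115.2, 48.0

fx_ndc = fx_screen * 2.0 / s                         # ~ 1.2  (fcl_ndc in the example)
px_ndc = -(px_screen - image_width / 2.0) * 2.0 / s  # ~ 0.2
py_ndc = -(py_screen - image_height / 2.0) * 2.0 / s # ~ 0.25

print(fx_ndc, px_ndc, py_ndc)
```

Running the formulas in the other direction reproduces `fcl_screen = 76.8` and `prp_screen = (115.2, 48)`, confirming the two camera specifications describe the same projection.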


@@ -5,7 +5,7 @@ sidebar_label: Cubify
# Cubify
The [cubify operator](https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/ops/cubify.py) converts a 3D occupancy grid of shape `BxDxHxW`, where `B` is the batch size, into a mesh instantiated as a [Meshes](https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/structures/meshes.py) data structure of `B` elements. The operator replaces every occupied voxel (if its occupancy probability is greater than a user defined threshold) with a cuboid of 12 faces and 8 vertices. Shared vertices are merged, and internal faces are removed resulting in a **watertight** mesh.
The operator provides three alignment modes {*topleft*, *corner*, *center*} which define the span of the mesh vertices with respect to the voxel grid. The alignment modes are described in the figure below for a 2D grid.
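The vertex-merging step is easy to illustrate outside PyTorch3D. This toy sketch (the `merged_vertex_count` helper is hypothetical, not part of the cubify API) counts the unique corners contributed by a set of occupied unit voxels, showing how corners shared between neighbouring voxels collapse into one vertex:

```python
from itertools import product


def merged_vertex_count(occupied):
    """Count unique cube-corner vertices over occupied voxels given as (i, j, k).

    Each occupied unit voxel contributes its 8 corners; corners shared
    between neighbouring voxels are counted once, as in cubify's merging step.
    """
    corners = set()
    for (i, j, k) in occupied:
        for di, dj, dk in product((0, 1), repeat=3):
            corners.add((i + di, j + dj, k + dk))
    return len(corners)


print(merged_vertex_count({(0, 0, 0)}))             # isolated voxel: 8 corners
print(merged_vertex_count({(0, 0, 0), (1, 0, 0)}))  # adjacent pair shares 4 corners: 12
```

A full 2x2x2 block of voxels yields 27 unique vertices (a 3x3x3 lattice of grid points), not 64, which is exactly the merging the operator performs before removing internal faces.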


@@ -9,12 +9,12 @@ sidebar_label: Data loaders
ShapeNet is a dataset of 3D CAD models. ShapeNetCore is a subset of the ShapeNet dataset and can be downloaded from https://www.shapenet.org/. There are two versions of ShapeNetCore: v1 (55 categories) and v2 (57 categories).
The PyTorch3D [ShapeNetCore data loader](https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/datasets/shapenet/shapenet_core.py) inherits from `torch.utils.data.Dataset`. It takes the path where the ShapeNetCore dataset is stored locally and loads models in the dataset. The ShapeNetCore class loads and returns models with their `categories`, `model_ids`, `vertices` and `faces`. The `ShapeNetCore` data loader also has a customized `render` function that renders models by the specified `model_ids (List[int])`, `categories (List[str])` or `indices (List[int])` with PyTorch3D's differentiable renderer.
The loaded dataset can be passed to `torch.utils.data.DataLoader` with PyTorch3D's customized collate_fn: `collate_batched_meshes` from the `pytorch3d.dataset.utils` module. The `vertices` and `faces` of the models are used to construct a [Meshes](https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/structures/meshes.py) object representing the batched meshes. This `Meshes` representation can be easily used with other ops and rendering in PyTorch3D.
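The collate pattern described above can be sketched in plain Python (a schematic of the idea, not PyTorch3D's actual `collate_batched_meshes`): per-sample fields are gathered into lists, and variable-sized `verts`/`faces` are left unpadded so that a `Meshes` object can be built from them.

```python
def collate_batched_meshes_sketch(batch):
    """Schematic collate_fn: gather each field across the batch into a list,
    keeping variable-sized `verts`/`faces` unpadded."""
    collated = {key: [sample[key] for sample in batch] for key in batch[0]}
    # In PyTorch3D, collated["verts"] / collated["faces"] would then be passed
    # to Meshes(verts=..., faces=...), which handles heterogeneous sizes.
    return collated

# Two toy "models" with different vertex counts (illustrative data only).
batch = [
    {"model_id": "a",
     "verts": [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
     "faces": [(0, 1, 2)]},
    {"model_id": "b",
     "verts": [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
     "faces": [(0, 1, 2), (0, 1, 3)]},
]
collated = collate_batched_meshes_sketch(batch)
```

This is why a custom collate_fn is needed at all: the default `torch.utils.data.DataLoader` collation would try to stack the per-model tensors, which fails when meshes have different numbers of vertices and faces.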
### R2N2
The R2N2 dataset contains 13 categories that are a subset of the ShapeNetCore v.1 dataset. The R2N2 dataset also contains its own 24 renderings of each object and voxelized models. The R2N2 Dataset can be downloaded following the instructions [here](http://3d-r2n2.stanford.edu/).
The PyTorch3D [R2N2 data loader](https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/datasets/r2n2/r2n2.py) is initialized with the paths to the ShapeNet dataset, the R2N2 dataset and the splits file for R2N2. Just like `ShapeNetCore`, it can be passed to `torch.utils.data.DataLoader` with a customized collate_fn: `collate_batched_R2N2` from the `pytorch3d.dataset.r2n2.utils` module. It returns all the data that `ShapeNetCore` returns, and in addition, it returns the R2N2 renderings (24 views for each model) along with the camera calibration matrices and a voxel representation for each model. Similar to `ShapeNetCore`, it has a customized `render` function that supports rendering specified models with the PyTorch3D differentiable renderer. In addition, it supports rendering models with the same orientations as R2N2's original renderings.

docs/notes/iou3d.md Normal file

@@ -0,0 +1,93 @@
---
hide_title: true
sidebar_label: IoU3D
---
# Intersection Over Union of Oriented 3D Boxes: A New Algorithm
Author: Georgia Gkioxari
Implementation: Georgia Gkioxari and Nikhila Ravi
## Description
Intersection over union (IoU) of boxes is widely used as an evaluation metric in object detection ([1][pascalvoc], [2][coco]).
In 2D, IoU is commonly applied to axis-aligned boxes, namely boxes with edges parallel to the image axis.
In 3D, boxes are usually not axis aligned and can be oriented in any way in the world.
We introduce a new algorithm which computes the *exact* IoU of two *oriented 3D boxes*.
Our algorithm is based on the simple observation that the intersection of two oriented 3D boxes, `box1` and `box2`, is a convex polyhedron (convex n-gon in 2D) with `n > 2` comprised of connected *planar units*.
In 3D, these planar units are 3D triangular faces.
In 2D, they are 2D edges.
Each planar unit belongs strictly to either `box1` or `box2`.
Our algorithm finds these units by iterating through the sides of each box.
1. For each 3D triangular face `e` in `box1` we check whether `e` is *inside* `box2`.
2. If `e` is not *inside*, then we discard it.
3. If `e` is *inside* or *partially inside*, then the part of `e` *inside* `box2` is added to the units that comprise the final intersection shape.
4. We repeat for `box2`.
Below, we show a visualization of our algorithm for the case of 2D oriented boxes.
<p align="center">
<img src="assets/iou3d.gif" alt="drawing" width="400"/>
</p>
Note that when a box's unit `e` is *partially inside* a `box` then `e` breaks into smaller units. In 2D, `e` is an edge and breaks into smaller edges. In 3D, `e` is a 3D triangular face and is clipped to more and smaller faces by the plane of the `box` it intersects with.
This is the sole fundamental difference between the algorithms for 2D and 3D.
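The 2D case shown in the visualization above can be reproduced with a short sketch (an illustrative re-implementation in plain Python, not the PyTorch3D code): clip one box against the other with the Sutherland-Hodgman algorithm, measure the area of the resulting intersection polygon with the shoelace formula, and form the IoU.

```python
def clip_convex(subject, clipper):
    """Sutherland-Hodgman: clip polygon `subject` by a convex CCW `clipper`."""
    def left_of(p, a, b):  # p on or left of the directed edge a->b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def line_hit(p, q, a, b):  # intersection of segment p-q with line a-b
        den = (p[0] - q[0]) * (a[1] - b[1]) - (p[1] - q[1]) * (a[0] - b[0])
        t = ((p[0] - a[0]) * (a[1] - b[1]) - (p[1] - a[1]) * (a[0] - b[0])) / den
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

    out = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        inp, out = out, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j + 1) % len(inp)]
            if left_of(q, a, b):
                if not left_of(p, a, b):
                    out.append(line_hit(p, q, a, b))  # edge enters the clipper
                out.append(q)
            elif left_of(p, a, b):
                out.append(line_hit(p, q, a, b))      # edge leaves the clipper
        if not out:
            break
    return out

def area(poly):
    """Shoelace formula for the area of a simple polygon."""
    n = len(poly)
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                         - poly[(i + 1) % n][0] * poly[i][1]
                         for i in range(n)))

def iou_2d(box1, box2):
    inter = area(clip_convex(box1, box2))
    return inter / (area(box1) + area(box2) - inter)

# Unit square and a copy shifted by (0.5, 0.5): intersection 0.25, union 1.75.
box1 = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # CCW
box2 = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
```

Note that this sketch, like Objectron, works purely with intersection points; the algorithm described above additionally tracks which box each planar unit came from, which is what removes the convex-hull step in 3D.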
## Comparison With Other Algorithms
Current algorithms for 3D box IoU rely on crude approximations or make restrictive assumptions about the boxes, for example restricting the orientation of the 3D boxes.
[Objectron][objectron] provides a nice discussion on the limitations of prior works.
[Objectron][objectron] introduces a great algorithm for exact IoU computation of oriented 3D boxes.
Objectron's algorithm computes the intersection points of two boxes using the [Sutherland-Hodgman algorithm][clipalgo].
The intersection shape is formed by the convex hull from the intersection points, using the [Qhull library][qhull].
Our algorithm has several advantages over Objectron's:
* Our algorithm also computes the points of intersection, similar to Objectron, but in addition stores the *planar units* the points belong to. This eliminates the need for convex hull computation, which is `O(n log n)` and relies on a third-party library which often crashes with nondescript error messages.
* Objectron's implementation assumes that boxes are a rotation away from axis aligned. Our algorithm and implementation make no such assumption and work for any 3D boxes.
* Our implementation supports batching, unlike Objectron which assumes single element inputs for `box1` and `box2`.
* Our implementation is easily parallelizable and in fact we provide a custom C++/CUDA implementation which is **450 times faster than Objectron**.
Below we compare the performance of Objectron (in C++) and our algorithm, in C++ and CUDA. We benchmark for a common use case in object detection where `boxes1` hold M predictions and `boxes2` hold N ground truth 3D boxes in an image and compute the `MxN` IoU matrix. We report the time in ms for `M=N=16`.
<p align="center">
<img src="assets/iou3d_comp.png" alt="drawing" width="400"/>
</p>
## Usage and Code
```python
from pytorch3d.ops import box3d_overlap
# Assume inputs: boxes1 (M, 8, 3) and boxes2 (N, 8, 3)
intersection_vol, iou_3d = box3d_overlap(boxes1, boxes2)
```
For more details, read [iou_box3d.py](https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/ops/iou_box3d.py).
Note that our implementation is not differentiable as of now. We plan to add gradient support soon.
We also have extensive [tests](https://github.com/facebookresearch/pytorch3d/blob/main/tests/test_iou_box3d.py) comparing our implementation with Objectron and MeshLab.
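One cheap sanity check for any 3D IoU implementation is the axis-aligned special case, which has a closed form. A small pure-Python reference (independent of PyTorch3D, using hypothetical `(min, max)` corner inputs rather than the `(8, 3)` corner format above):

```python
def aabb_iou(min1, max1, min2, max2):
    """Exact IoU of two axis-aligned 3D boxes given (min, max) corners."""
    inter = 1.0
    for lo1, hi1, lo2, hi2 in zip(min1, max1, min2, max2):
        overlap = min(hi1, hi2) - max(lo1, lo2)
        if overlap <= 0:
            return 0.0  # boxes are disjoint along this axis
        inter *= overlap
    vol1 = vol2 = 1.0
    for (a, b), (c, d) in zip(zip(min1, max1), zip(min2, max2)):
        vol1 *= b - a
        vol2 *= d - c
    return inter / (vol1 + vol2 - inter)

# Unit cube vs. the same cube shifted by 0.5 along every axis:
# intersection 0.5^3 = 0.125, union 2 - 0.125 = 1.875, IoU = 1/15.
iou = aabb_iou((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (1.5, 1.5, 1.5))
```

For axis-aligned inputs, `box3d_overlap` should agree with this closed form up to numerical tolerance.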
## Cite
If you use our 3D IoU algorithm, please cite PyTorch3D
```bibtex
@article{ravi2020pytorch3d,
author = {Nikhila Ravi and Jeremy Reizenstein and David Novotny and Taylor Gordon
and Wan-Yen Lo and Justin Johnson and Georgia Gkioxari},
title = {Accelerating 3D Deep Learning with PyTorch3D},
journal = {arXiv:2007.08501},
year = {2020},
}
```
[pascalvoc]: http://host.robots.ox.ac.uk/pascal/VOC/
[coco]: https://cocodataset.org/
[objectron]: https://arxiv.org/abs/2012.09988
[qhull]: http://www.qhull.org/
[clipalgo]: https://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm


@@ -21,7 +21,7 @@ Our implementation decouples the rasterization and shading steps of rendering. T
## <u>Get started</u>
To learn more about the implementation and start using the renderer refer to [getting started with renderer](renderer_getting_started.md), which also contains the [architecture overview](assets/architecture_renderer.jpg) and [coordinate transformation conventions](assets/transforms_overview.jpg).
## <u>Tech Report</u>


@@ -74,7 +74,7 @@ Since v0.3, [pulsar](https://arxiv.org/abs/2004.07484) can be used as a backend
<img align="center" src="assets/pulsar_bm.png" width="300">
Pulsar's processing steps are tightly integrated CUDA kernels and do not work with custom `rasterizer` and `compositor` components. We provide two ways to use Pulsar: (1) there is a unified interface to match the PyTorch3D calling convention seamlessly. This is, for example, illustrated in the [point cloud tutorial](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/render_colored_points.ipynb). (2) There is a direct interface available to the pulsar backend, which exposes the full functionality of the backend (including opacity, which is not yet available in PyTorch3D). Examples showing its use as well as the matching PyTorch3D interface code are available in [this folder](https://github.com/facebookresearch/pytorch3d/tree/master/docs/examples).
---
@@ -84,7 +84,7 @@ For mesh texturing we offer several options (in `pytorch3d/renderer/mesh/texturi
1. **Vertex Textures**: D dimensional textures for each vertex (for example an RGB color) which can be interpolated across the face. This can be represented as an `(N, V, D)` tensor. This is a fairly simple representation though and cannot model complex textures if the mesh faces are large.
2. **UV Textures**: vertex UV coordinates and **one** texture map for the whole mesh. For a point on a face with given barycentric coordinates, the face color can be computed by interpolating the vertex UV coordinates and then sampling from the texture map. This representation requires two tensors (UVs: `(N, V, 2)`, Texture map: `(N, H, W, 3)`), and is limited to only one texture map per mesh.
3. **Face Textures**: In more complex cases, such as ShapeNet meshes, there are multiple texture maps per mesh and some faces have texture while others do not. For these cases, a more flexible representation is a texture atlas, where each face is represented as an `(RxR)` texture map, where R is the texture resolution. For a given point on the face, the texture value can be sampled from the per-face texture map using the barycentric coordinates of the point. This representation requires one tensor of shape `(N, F, R, R, 3)`. This texturing method is inspired by the SoftRasterizer implementation. For more details refer to the [`make_material_atlas`](https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/io/mtl_io.py#L123) and [`sample_textures`](https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/renderer/mesh/textures.py#L452) functions.
**NOTE:** The `TexturesAtlas` texture sampling is only differentiable with respect to the texture atlas, but not with respect to the barycentric coordinates.
<img src="assets/texturing.jpg" width="1000">
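The per-vertex interpolation used in option 1 (and the barycentric sampling in option 3) reduces to a barycentric-weighted combination. A minimal sketch with a hypothetical helper name (not PyTorch3D's API):

```python
def interpolate_vertex_color(bary, colors):
    """Color at a point inside a triangular face: barycentric-weighted sum of
    the three per-vertex colors. bary = (w0, w1, w2), summing to 1."""
    w0, w1, w2 = bary
    c0, c1, c2 = colors
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))

# RGB colors at the three vertices of a face: red, green, blue.
face_colors = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
# The face centroid blends all three equally.
center = interpolate_vertex_color((1 / 3, 1 / 3, 1 / 3), face_colors)
```

Replacing the three colors with UV coordinates gives the interpolation step of option 2; the interpolated UV is then used to sample the texture map.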


@@ -5,12 +5,12 @@ sidebar_label: Plotly Visualization
# Overview
PyTorch3D provides a modular differentiable renderer, but for instances where we want interactive plots or are not concerned with the differentiability of the rendering process, we provide [functions to render meshes and pointclouds in plotly](https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/vis/plotly_vis.py). These plotly figures allow you to rotate and zoom the rendered images and support plotting batched data as multiple traces in a singular plot or divided into individual subplots.
# Examples
These rendering functions accept plotly x, y, and z axis arguments as `kwargs`, allowing us to customize the plots. Here are some plots with colored axes: a [Pointclouds plot](assets/plotly_pointclouds.png), a [batched Meshes plot in subplots](assets/plotly_meshes_batch.png), and a [batched Meshes plot with multiple traces](assets/plotly_meshes_trace.png). Refer to the [render textured meshes](https://pytorch3d.org/tutorials/render_textured_meshes) and [render colored pointclouds](https://pytorch3d.org/tutorials/render_colored_points) tutorials for code examples.
# Saving plots to images


@@ -10,7 +10,7 @@
},
"outputs": [],
"source": [
"# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved."
]
},
{
@@ -36,10 +36,10 @@
"where $d(g_i, g_j)$ is a suitable metric that compares the extrinsics of cameras $g_i$ and $g_j$. \n",
"\n",
"Visually, the problem can be described as follows. The picture below depicts the situation at the beginning of our optimization. The ground truth cameras are plotted in purple while the randomly initialized estimated cameras are plotted in orange:\n",
"![Initialization](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/data/bundle_adjustment_initialization.png?raw=1)\n",
"\n",
"Our optimization seeks to align the estimated (orange) cameras with the ground truth (purple) cameras, by minimizing the discrepancies between pairs of relative cameras. Thus, the solution to the problem should look as follows:\n",
"![Solution](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/data/bundle_adjustment_final.png?raw=1)\n",
"\n",
"In practice, the camera extrinsics $g_{ij}$ and $g_i$ are represented using objects from the `SfMPerspectiveCameras` class initialized with the corresponding rotation and translation matrices `R_absolute` and `T_absolute` that define the extrinsic parameters $g = (R, T); R \\in SO(3); T \\in \\mathbb{R}^3$. In order to ensure that `R_absolute` is a valid rotation matrix, we represent it using an exponential map (implemented with `so3_exp_map`) of the axis-angle representation of the rotation `log_R_absolute`.\n",
"\n",
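The exponential map mentioned in the cell above converts an axis-angle vector into a rotation matrix via Rodrigues' formula. A plain-Python sketch of the idea behind `so3_exp_map` (not the PyTorch3D implementation, which is batched and differentiable):

```python
import math

def so3_exp(log_r):
    """Rodrigues' formula: axis-angle vector -> 3x3 rotation matrix.
    R = I + sin(theta) K + (1 - cos(theta)) K^2, K = skew(axis)."""
    theta = math.sqrt(sum(c * c for c in log_r))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / theta for c in log_r)
    K = [[0.0, -kz, ky], [kz, 0.0, -kx], [-ky, kx, 0.0]]  # skew-symmetric
    K2 = [[sum(K[i][m] * K[m][j] for m in range(3)) for j in range(3)]
          for i in range(3)]
    s, c1 = math.sin(theta), 1.0 - math.cos(theta)
    return [[(1.0 if i == j else 0.0) + s * K[i][j] + c1 * K2[i][j]
             for j in range(3)] for i in range(3)]

# A 90-degree rotation about the z-axis maps the x-axis to the y-axis,
# i.e. the first column of R is approximately (0, 1, 0).
R = so3_exp([0.0, 0.0, math.pi / 2])
```

Because any axis-angle vector maps to a valid rotation, optimizing `log_R_absolute` directly keeps `R_absolute` on SO(3) without explicit constraints.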
@@ -89,14 +89,16 @@
"except ModuleNotFoundError:\n",
"    need_pytorch3d=True\n",
"if need_pytorch3d:\n",
"    if torch.__version__.startswith(\"1.11.\") and sys.platform.startswith(\"linux\"):\n",
"        # We try to install PyTorch3D via a released wheel.\n",
"        pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
"        version_str=\"\".join([\n",
"            f\"py3{sys.version_info.minor}_cu\",\n",
"            torch.version.cuda.replace(\".\",\"\"),\n",
"            f\"_pyt{pyt_version_str}\"\n",
"        ])\n",
"        !pip install fvcore iopath\n",
"        !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
"    else:\n",
"        # We try to install PyTorch3D from source.\n",
"        !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n",
@@ -167,11 +169,11 @@
},
"outputs": [],
"source": [
"!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/utils/camera_visualization.py\n",
"from camera_visualization import plot_camera_scene\n",
"\n",
"!mkdir data\n",
"!wget -P data https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/data/camera_graph.pth"
]
},
{


@@ -10,7 +10,7 @@
},
"outputs": [],
"source": [
"# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved."
]
},
{
@@ -76,14 +76,16 @@
"except ModuleNotFoundError:\n",
"    need_pytorch3d=True\n",
"if need_pytorch3d:\n",
"    if torch.__version__.startswith(\"1.11.\") and sys.platform.startswith(\"linux\"):\n",
"        # We try to install PyTorch3D via a released wheel.\n",
"        pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
"        version_str=\"\".join([\n",
"            f\"py3{sys.version_info.minor}_cu\",\n",
"            torch.version.cuda.replace(\".\",\"\"),\n",
"            f\"_pyt{pyt_version_str}\"\n",
"        ])\n",
"        !pip install fvcore iopath\n",
"        !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
"    else:\n",
"        # We try to install PyTorch3D from source.\n",
"        !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n",


@@ -6,7 +6,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved."
]
},
{
@@ -51,14 +51,16 @@
"except ModuleNotFoundError:\n",
"    need_pytorch3d=True\n",
"if need_pytorch3d:\n",
"    if torch.__version__.startswith(\"1.11.\") and sys.platform.startswith(\"linux\"):\n",
"        # We try to install PyTorch3D via a released wheel.\n",
"        pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
"        version_str=\"\".join([\n",
"            f\"py3{sys.version_info.minor}_cu\",\n",
"            torch.version.cuda.replace(\".\",\"\"),\n",
"            f\"_pyt{pyt_version_str}\"\n",
"        ])\n",
"        !pip install fvcore iopath\n",
"        !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
"    else:\n",
"        # We try to install PyTorch3D from source.\n",
"        !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n",
@@ -112,7 +114,7 @@
"metadata": {},
"outputs": [],
"source": [
"!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/utils/plot_image_grid.py\n",
"from plot_image_grid import image_grid"
]
},


@@ -10,7 +10,7 @@
},
"outputs": [],
"source": [
"# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved."
]
},
{
@@ -90,14 +90,16 @@
"except ModuleNotFoundError:\n",
"    need_pytorch3d=True\n",
"if need_pytorch3d:\n",
"    if torch.__version__.startswith(\"1.11.\") and sys.platform.startswith(\"linux\"):\n",
"        # We try to install PyTorch3D via a released wheel.\n",
"        pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
"        version_str=\"\".join([\n",
"            f\"py3{sys.version_info.minor}_cu\",\n",
"            torch.version.cuda.replace(\".\",\"\"),\n",
"            f\"_pyt{pyt_version_str}\"\n",
"        ])\n",
"        !pip install fvcore iopath\n",
"        !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
"    else:\n",
"        # We try to install PyTorch3D from source.\n",
"        !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n",


@@ -6,7 +6,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved."
]
},
{
@@ -56,14 +56,16 @@
"except ModuleNotFoundError:\n",
"    need_pytorch3d=True\n",
"if need_pytorch3d:\n",
"    if torch.__version__.startswith(\"1.11.\") and sys.platform.startswith(\"linux\"):\n",
"        # We try to install PyTorch3D via a released wheel.\n",
"        pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
"        version_str=\"\".join([\n",
"            f\"py3{sys.version_info.minor}_cu\",\n",
"            torch.version.cuda.replace(\".\",\"\"),\n",
"            f\"_pyt{pyt_version_str}\"\n",
"        ])\n",
"        !pip install fvcore iopath\n",
"        !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
"    else:\n",
"        # We try to install PyTorch3D from source.\n",
"        !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n",
@@ -98,7 +100,7 @@
"from pytorch3d.transforms import so3_exp_map\n",
"from pytorch3d.renderer import (\n",
"    FoVPerspectiveCameras, \n",
"    NDCMultinomialRaysampler,\n",
"    MonteCarloRaysampler,\n",
"    EmissionAbsorptionRaymarcher,\n",
"    ImplicitRenderer,\n",
@@ -126,8 +128,8 @@
"metadata": {},
"outputs": [],
"source": [
"!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/utils/plot_image_grid.py\n",
"!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/utils/generate_cow_renders.py\n",
"from plot_image_grid import image_grid\n",
"from generate_cow_renders import generate_cow_renders"
]
@@ -184,7 +186,7 @@
"The renderer is composed of a *raymarcher* and a *raysampler*.\n",
"- The *raysampler* is responsible for emitting rays from image pixels and sampling the points along them. Here, we use two different raysamplers:\n",
"    - `MonteCarloRaysampler` is used to generate rays from a random subset of pixels of the image plane. The random subsampling of pixels is carried out during **training** to decrease the memory consumption of the implicit model.\n",
"    - `NDCMultinomialRaysampler` which follows the standard PyTorch3D coordinate grid convention (+X from right to left; +Y from bottom to top; +Z away from the user). In combination with the implicit model of the scene, `NDCMultinomialRaysampler` consumes a large amount of memory and, hence, is only used for visualizing the results of the training at **test** time.\n",
"- The *raymarcher* takes the densities and colors sampled along each ray and renders each ray into a color and an opacity value of the ray's source pixel. Here we use the `EmissionAbsorptionRaymarcher` which implements the standard Emission-Absorption raymarching algorithm."
]
},
@@ -209,10 +211,10 @@
"\n", "\n",
"# 1) Instantiate the raysamplers.\n", "# 1) Instantiate the raysamplers.\n",
"\n", "\n",
"# Here, NDCGridRaysampler generates a rectangular image\n", "# Here, NDCMultinomialRaysampler generates a rectangular image\n",
"# grid of rays whose coordinates follow the PyTorch3D\n", "# grid of rays whose coordinates follow the PyTorch3D\n",
"# coordinate conventions.\n", "# coordinate conventions.\n",
"raysampler_grid = NDCGridRaysampler(\n", "raysampler_grid = NDCMultinomialRaysampler(\n",
" image_height=render_size,\n", " image_height=render_size,\n",
" image_width=render_size,\n", " image_width=render_size,\n",
" n_pts_per_ray=128,\n", " n_pts_per_ray=128,\n",
@@ -813,7 +815,7 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## 5. Visualizing the optimized neural radiance field\n", "## 6. Visualizing the optimized neural radiance field\n",
"\n", "\n",
"Finally, we visualize the neural radiance field by rendering from multiple viewpoints that rotate around the volume's y-axis." "Finally, we visualize the neural radiance field by rendering from multiple viewpoints that rotate around the volume's y-axis."
] ]
@@ -842,7 +844,7 @@
" fov=target_cameras.fov[0],\n", " fov=target_cameras.fov[0],\n",
" device=device,\n", " device=device,\n",
" )\n", " )\n",
" # Note that we again render with `NDCGridRaySampler`\n", " # Note that we again render with `NDCMultinomialRaysampler`\n",
" # and the batched_forward function of neural_radiance_field.\n", " # and the batched_forward function of neural_radiance_field.\n",
" frames.append(\n", " frames.append(\n",
" renderer_grid(\n", " renderer_grid(\n",
@@ -863,9 +865,9 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## 6. Conclusion\n", "## 7. Conclusion\n",
"\n", "\n",
"In this tutorial, we have shown how to optimize an implicit representation of a scene such that the renders of the scene from known viewpoints match the observed images for each viewpoint. The rendering was carried out using PyTorch3D's implicit function renderer composed of either a `MonteCarloRaysampler` or `NDCGridRaysampler`, and an `EmissionAbsorptionRaymarcher`." "In this tutorial, we have shown how to optimize an implicit representation of a scene such that the renders of the scene from known viewpoints match the observed images for each viewpoint. The rendering was carried out using PyTorch3D's implicit function renderer composed of either a `MonteCarloRaysampler` or `NDCMultinomialRaysampler`, and an `EmissionAbsorptionRaymarcher`."
] ]
} }
], ],
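The Emission-Absorption rule used by `EmissionAbsorptionRaymarcher` above can be sketched in plain Python. This is a simplified scalar version for a single ray (not PyTorch3D's batched tensor implementation): each sampled point absorbs a fraction of the remaining light and contributes its color weighted by the light that reached it.

```python
def emission_absorption(densities, colors):
    """Standard Emission-Absorption raymarching for one ray.

    densities: per-point absorption probabilities in [0, 1], near-to-far.
    colors: per-point scalar colors.
    Returns the rendered color and the ray's opacity.
    """
    transmittance = 1.0  # fraction of the ray's light not yet absorbed
    rendered_color = 0.0
    for density, color in zip(densities, colors):
        weight = transmittance * density  # light emitted/absorbed at this point
        rendered_color += weight * color
        transmittance *= 1.0 - density
    opacity = 1.0 - transmittance
    return rendered_color, opacity

# Two points, each absorbing half the remaining light; only the first is white.
color, opacity = emission_absorption([0.5, 0.5], [1.0, 0.0])
print(color, opacity)  # 0.5 0.75
```

The weights `w_i = density_i * prod_{j<i}(1 - density_j)` sum with the residual transmittance to 1, which is why the raymarcher can also return the opacity as `1 - transmittance`.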

View File

@@ -10,7 +10,7 @@
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved." "# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved."
] ]
}, },
{ {
@@ -68,14 +68,16 @@
"except ModuleNotFoundError:\n", "except ModuleNotFoundError:\n",
" need_pytorch3d=True\n", " need_pytorch3d=True\n",
"if need_pytorch3d:\n", "if need_pytorch3d:\n",
" if torch.__version__.startswith(\"1.9\") and sys.platform.startswith(\"linux\"):\n", " if torch.__version__.startswith(\"1.11.\") and sys.platform.startswith(\"linux\"):\n",
" # We try to install PyTorch3D via a released wheel.\n", " # We try to install PyTorch3D via a released wheel.\n",
" pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
" version_str=\"\".join([\n", " version_str=\"\".join([\n",
" f\"py3{sys.version_info.minor}_cu\",\n", " f\"py3{sys.version_info.minor}_cu\",\n",
" torch.version.cuda.replace(\".\",\"\"),\n", " torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{torch.__version__[0:5:2]}\"\n", " f\"_pyt{pyt_version_str}\"\n",
" ])\n", " ])\n",
" !pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n", " !pip install fvcore iopath\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
" else:\n", " else:\n",
" # We try to install PyTorch3D from source.\n", " # We try to install PyTorch3D from source.\n",
" !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n", " !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n",
@@ -116,7 +118,7 @@
"from pytorch3d.structures import Meshes\n", "from pytorch3d.structures import Meshes\n",
"from pytorch3d.renderer import (\n", "from pytorch3d.renderer import (\n",
" look_at_view_transform,\n", " look_at_view_transform,\n",
" OpenGLPerspectiveCameras, \n", " FoVPerspectiveCameras, \n",
" PointLights, \n", " PointLights, \n",
" DirectionalLights, \n", " DirectionalLights, \n",
" Materials, \n", " Materials, \n",
@@ -155,7 +157,7 @@
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/master/docs/tutorials/utils/plot_image_grid.py\n", "!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/utils/plot_image_grid.py\n",
"from plot_image_grid import image_grid" "from plot_image_grid import image_grid"
] ]
}, },
@@ -302,11 +304,11 @@
"# broadcasting. So we can view the camera from a distance of dist=2.7, and \n", "# broadcasting. So we can view the camera from a distance of dist=2.7, and \n",
"# then specify elevation and azimuth angles for each viewpoint as tensors. \n", "# then specify elevation and azimuth angles for each viewpoint as tensors. \n",
"R, T = look_at_view_transform(dist=2.7, elev=elev, azim=azim)\n", "R, T = look_at_view_transform(dist=2.7, elev=elev, azim=azim)\n",
"cameras = OpenGLPerspectiveCameras(device=device, R=R, T=T)\n", "cameras = FoVPerspectiveCameras(device=device, R=R, T=T)\n",
"\n", "\n",
"# We arbitrarily choose one particular view that will be used to visualize \n", "# We arbitrarily choose one particular view that will be used to visualize \n",
"# results\n", "# results\n",
"camera = OpenGLPerspectiveCameras(device=device, R=R[None, 1, ...], \n", "camera = FoVPerspectiveCameras(device=device, R=R[None, 1, ...], \n",
" T=T[None, 1, ...]) \n", " T=T[None, 1, ...]) \n",
"\n", "\n",
"# Define the settings for rasterization and shading. Here we set the output \n", "# Define the settings for rasterization and shading. Here we set the output \n",
@@ -349,7 +351,7 @@
"# Our multi-view cow dataset will be represented by these 2 lists of tensors,\n", "# Our multi-view cow dataset will be represented by these 2 lists of tensors,\n",
"# each of length num_views.\n", "# each of length num_views.\n",
"target_rgb = [target_images[i, ..., :3] for i in range(num_views)]\n", "target_rgb = [target_images[i, ..., :3] for i in range(num_views)]\n",
"target_cameras = [OpenGLPerspectiveCameras(device=device, R=R[None, i, ...], \n", "target_cameras = [FoVPerspectiveCameras(device=device, R=R[None, i, ...], \n",
" T=T[None, i, ...]) for i in range(num_views)]" " T=T[None, i, ...]) for i in range(num_views)]"
] ]
}, },
@@ -706,6 +708,7 @@
" image_size=128, \n", " image_size=128, \n",
" blur_radius=np.log(1. / 1e-4 - 1.)*sigma, \n", " blur_radius=np.log(1. / 1e-4 - 1.)*sigma, \n",
" faces_per_pixel=50, \n", " faces_per_pixel=50, \n",
" perspective_correct=False, \n",
")\n", ")\n",
"\n", "\n",
"# Differentiable soft renderer using per vertex RGB colors for texture\n", "# Differentiable soft renderer using per vertex RGB colors for texture\n",
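The `blur_radius` formula above, `np.log(1. / 1e-4 - 1.) * sigma`, inverts the sigmoid used for soft face boundaries: it is the squared distance at which a face's influence decays to roughly 1e-4 (our reading of the formula; `eps` is our name for that threshold). A quick check with the standard library:

```python
import math

sigma = 1e-4
eps = 1e-4
# blur_radius = log(1/eps - 1) * sigma: the squared distance where the
# sigmoid 1 / (1 + exp(d2 / sigma)) has decayed to eps.
blur_radius = math.log(1.0 / eps - 1.0) * sigma
print(blur_radius)  # ~9.21e-4

# Sanity check: evaluating the sigmoid at d2 = blur_radius recovers eps.
prob = 1.0 / (1.0 + math.exp(blur_radius / sigma))
print(prob)  # 1e-4
```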

View File

@@ -6,7 +6,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved." "# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved."
] ]
}, },
{ {
@@ -47,14 +47,16 @@
"except ModuleNotFoundError:\n", "except ModuleNotFoundError:\n",
" need_pytorch3d=True\n", " need_pytorch3d=True\n",
"if need_pytorch3d:\n", "if need_pytorch3d:\n",
" if torch.__version__.startswith(\"1.9\") and sys.platform.startswith(\"linux\"):\n", " if torch.__version__.startswith(\"1.11.\") and sys.platform.startswith(\"linux\"):\n",
" # We try to install PyTorch3D via a released wheel.\n", " # We try to install PyTorch3D via a released wheel.\n",
" pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
" version_str=\"\".join([\n", " version_str=\"\".join([\n",
" f\"py3{sys.version_info.minor}_cu\",\n", " f\"py3{sys.version_info.minor}_cu\",\n",
" torch.version.cuda.replace(\".\",\"\"),\n", " torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{torch.__version__[0:5:2]}\"\n", " f\"_pyt{pyt_version_str}\"\n",
" ])\n", " ])\n",
" !pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n", " !pip install fvcore iopath\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
" else:\n", " else:\n",
" # We try to install PyTorch3D from source.\n", " # We try to install PyTorch3D from source.\n",
" !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n", " !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n",
@@ -87,7 +89,7 @@
"from pytorch3d.renderer import (\n", "from pytorch3d.renderer import (\n",
" FoVPerspectiveCameras, \n", " FoVPerspectiveCameras, \n",
" VolumeRenderer,\n", " VolumeRenderer,\n",
" NDCGridRaysampler,\n", " NDCMultinomialRaysampler,\n",
" EmissionAbsorptionRaymarcher\n", " EmissionAbsorptionRaymarcher\n",
")\n", ")\n",
"from pytorch3d.transforms import so3_exp_map\n", "from pytorch3d.transforms import so3_exp_map\n",
@@ -106,8 +108,8 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/master/docs/tutorials/utils/plot_image_grid.py\n", "!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/utils/plot_image_grid.py\n",
"!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/master/docs/tutorials/utils/generate_cow_renders.py\n", "!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/utils/generate_cow_renders.py\n",
"from plot_image_grid import image_grid\n", "from plot_image_grid import image_grid\n",
"from generate_cow_renders import generate_cow_renders" "from generate_cow_renders import generate_cow_renders"
] ]
@@ -162,7 +164,7 @@
"The following initializes a volumetric renderer that emits a ray from each pixel of a target image and samples a set of uniformly-spaced points along the ray. At each ray-point, the corresponding density and color values are obtained by querying the corresponding location in the volumetric model of the scene (the model is described & instantiated in a later cell).\n", "The following initializes a volumetric renderer that emits a ray from each pixel of a target image and samples a set of uniformly-spaced points along the ray. At each ray-point, the corresponding density and color values are obtained by querying the corresponding location in the volumetric model of the scene (the model is described & instantiated in a later cell).\n",
"\n", "\n",
"The renderer is composed of a *raymarcher* and a *raysampler*.\n", "The renderer is composed of a *raymarcher* and a *raysampler*.\n",
"- The *raysampler* is responsible for emitting rays from image pixels and sampling the points along them. Here, we use the `NDCGridRaysampler` which follows the standard PyTorch3D coordinate grid convention (+X from right to left; +Y from bottom to top; +Z away from the user).\n", "- The *raysampler* is responsible for emitting rays from image pixels and sampling the points along them. Here, we use the `NDCMultinomialRaysampler` which follows the standard PyTorch3D coordinate grid convention (+X from right to left; +Y from bottom to top; +Z away from the user).\n",
"- The *raymarcher* takes the densities and colors sampled along each ray and renders each ray into a color and an opacity value of the ray's source pixel. Here we use the `EmissionAbsorptionRaymarcher` which implements the standard Emission-Absorption raymarching algorithm." "- The *raymarcher* takes the densities and colors sampled along each ray and renders each ray into a color and an opacity value of the ray's source pixel. Here we use the `EmissionAbsorptionRaymarcher` which implements the standard Emission-Absorption raymarching algorithm."
] ]
}, },
@@ -184,14 +186,14 @@
"volume_extent_world = 3.0\n", "volume_extent_world = 3.0\n",
"\n", "\n",
"# 1) Instantiate the raysampler.\n", "# 1) Instantiate the raysampler.\n",
"# Here, NDCGridRaysampler generates a rectangular image\n", "# Here, NDCMultinomialRaysampler generates a rectangular image\n",
"# grid of rays whose coordinates follow the PyTorch3D\n", "# grid of rays whose coordinates follow the PyTorch3D\n",
"# coordinate conventions.\n", "# coordinate conventions.\n",
"# Since we use a volume of size 128^3, we sample n_pts_per_ray=150,\n", "# Since we use a volume of size 128^3, we sample n_pts_per_ray=150,\n",
"# which roughly corresponds to one ray-point per voxel.\n", "# which roughly corresponds to one ray-point per voxel.\n",
"# We further set the min_depth=0.1 since there is no surface within\n", "# We further set the min_depth=0.1 since there is no surface within\n",
"# 0.1 units of any camera plane.\n", "# 0.1 units of any camera plane.\n",
"raysampler = NDCGridRaysampler(\n", "raysampler = NDCMultinomialRaysampler(\n",
" image_width=render_size,\n", " image_width=render_size,\n",
" image_height=render_size,\n", " image_height=render_size,\n",
" n_pts_per_ray=150,\n", " n_pts_per_ray=150,\n",
@@ -460,7 +462,7 @@
"source": [ "source": [
"## 6. Conclusion\n", "## 6. Conclusion\n",
"\n", "\n",
"In this tutorial, we have shown how to optimize a 3D volumetric representation of a scene such that the renders of the volume from known viewpoints match the observed images for each viewpoint. The rendering was carried out using PyTorch3D's volumetric renderer composed of an `NDCGridRaysampler` and an `EmissionAbsorptionRaymarcher`." "In this tutorial, we have shown how to optimize a 3D volumetric representation of a scene such that the renders of the volume from known viewpoints match the observed images for each viewpoint. The rendering was carried out using PyTorch3D's volumetric renderer composed of an `NDCMultinomialRaysampler` and an `EmissionAbsorptionRaymarcher`."
] ]
} }
], ],

View File

@@ -6,7 +6,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved." "# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved."
] ]
}, },
{ {
@@ -50,14 +50,16 @@
"except ModuleNotFoundError:\n", "except ModuleNotFoundError:\n",
" need_pytorch3d=True\n", " need_pytorch3d=True\n",
"if need_pytorch3d:\n", "if need_pytorch3d:\n",
" if torch.__version__.startswith(\"1.9\") and sys.platform.startswith(\"linux\"):\n", " if torch.__version__.startswith(\"1.11.\") and sys.platform.startswith(\"linux\"):\n",
" # We try to install PyTorch3D via a released wheel.\n", " # We try to install PyTorch3D via a released wheel.\n",
" pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
" version_str=\"\".join([\n", " version_str=\"\".join([\n",
" f\"py3{sys.version_info.minor}_cu\",\n", " f\"py3{sys.version_info.minor}_cu\",\n",
" torch.version.cuda.replace(\".\",\"\"),\n", " torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{torch.__version__[0:5:2]}\"\n", " f\"_pyt{pyt_version_str}\"\n",
" ])\n", " ])\n",
" !pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n", " !pip install fvcore iopath\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
" else:\n", " else:\n",
" # We try to install PyTorch3D from source.\n", " # We try to install PyTorch3D from source.\n",
" !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n", " !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n",

View File

@@ -6,7 +6,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved." "# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved."
] ]
}, },
{ {
@@ -57,14 +57,16 @@
"except ModuleNotFoundError:\n", "except ModuleNotFoundError:\n",
" need_pytorch3d=True\n", " need_pytorch3d=True\n",
"if need_pytorch3d:\n", "if need_pytorch3d:\n",
" if torch.__version__.startswith(\"1.9\") and sys.platform.startswith(\"linux\"):\n", " if torch.__version__.startswith(\"1.11.\") and sys.platform.startswith(\"linux\"):\n",
" # We try to install PyTorch3D via a released wheel.\n", " # We try to install PyTorch3D via a released wheel.\n",
" pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
" version_str=\"\".join([\n", " version_str=\"\".join([\n",
" f\"py3{sys.version_info.minor}_cu\",\n", " f\"py3{sys.version_info.minor}_cu\",\n",
" torch.version.cuda.replace(\".\",\"\"),\n", " torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{torch.__version__[0:5:2]}\"\n", " f\"_pyt{pyt_version_str}\"\n",
" ])\n", " ])\n",
" !pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n", " !pip install fvcore iopath\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
" else:\n", " else:\n",
" # We try to install PyTorch3D from source.\n", " # We try to install PyTorch3D from source.\n",
" !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n", " !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n",

View File

@@ -10,7 +10,7 @@
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved." "# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved."
] ]
}, },
{ {
@@ -73,14 +73,16 @@
"except ModuleNotFoundError:\n", "except ModuleNotFoundError:\n",
" need_pytorch3d=True\n", " need_pytorch3d=True\n",
"if need_pytorch3d:\n", "if need_pytorch3d:\n",
" if torch.__version__.startswith(\"1.9\") and sys.platform.startswith(\"linux\"):\n", " if torch.__version__.startswith(\"1.11.\") and sys.platform.startswith(\"linux\"):\n",
" # We try to install PyTorch3D via a released wheel.\n", " # We try to install PyTorch3D via a released wheel.\n",
" pyt_version_str=torch.__version__.split(\"+\")[0].replace(\".\", \"\")\n",
" version_str=\"\".join([\n", " version_str=\"\".join([\n",
" f\"py3{sys.version_info.minor}_cu\",\n", " f\"py3{sys.version_info.minor}_cu\",\n",
" torch.version.cuda.replace(\".\",\"\"),\n", " torch.version.cuda.replace(\".\",\"\"),\n",
" f\"_pyt{torch.__version__[0:5:2]}\"\n", " f\"_pyt{pyt_version_str}\"\n",
" ])\n", " ])\n",
" !pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n", " !pip install fvcore iopath\n",
" !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n",
" else:\n", " else:\n",
" # We try to install PyTorch3D from source.\n", " # We try to install PyTorch3D from source.\n",
" !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n", " !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n",
@@ -154,7 +156,7 @@
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/master/docs/tutorials/utils/plot_image_grid.py\n", "!wget https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/docs/tutorials/utils/plot_image_grid.py\n",
"from plot_image_grid import image_grid" "from plot_image_grid import image_grid"
] ]
}, },

View File

@@ -1,4 +1,4 @@
# Copyright (c) Facebook, Inc. and its affiliates. # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved. # All rights reserved.
# #
# This source code is licensed under the BSD-style license found in the # This source code is licensed under the BSD-style license found in the

View File

@@ -1,4 +1,4 @@
# Copyright (c) Facebook, Inc. and its affiliates. # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved. # All rights reserved.
# #
# This source code is licensed under the BSD-style license found in the # This source code is licensed under the BSD-style license found in the

View File

@@ -1,4 +1,4 @@
# Copyright (c) Facebook, Inc. and its affiliates. # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved. # All rights reserved.
# #
# This source code is licensed under the BSD-style license found in the # This source code is licensed under the BSD-style license found in the
@@ -12,13 +12,13 @@ from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import ( from pytorch3d.renderer import (
BlendParams, BlendParams,
FoVPerspectiveCameras, FoVPerspectiveCameras,
look_at_view_transform,
MeshRasterizer, MeshRasterizer,
MeshRenderer, MeshRenderer,
PointLights, PointLights,
RasterizationSettings, RasterizationSettings,
SoftPhongShader, SoftPhongShader,
SoftSilhouetteShader, SoftSilhouetteShader,
look_at_view_transform,
) )

View File

@@ -1,4 +1,4 @@
# Copyright (c) Facebook, Inc. and its affiliates. # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved. # All rights reserved.
# #
# This source code is licensed under the BSD-style license found in the # This source code is licensed under the BSD-style license found in the

View File

@@ -1,5 +1,5 @@
#!/bin/bash #!/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates. # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved. # All rights reserved.
# #
# This source code is licensed under the BSD-style license found in the # This source code is licensed under the BSD-style license found in the

View File

@@ -1,5 +1,5 @@
#!/bin/bash #!/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates. # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved. # All rights reserved.
# #
# This source code is licensed under the BSD-style license found in the # This source code is licensed under the BSD-style license found in the

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env bash #!/usr/bin/env bash
# Copyright (c) Facebook, Inc. and its affiliates. # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved. # All rights reserved.
# #
# This source code is licensed under the BSD-style license found in the # This source code is licensed under the BSD-style license found in the

View File

@@ -1,8 +1,7 @@
@REM Copyright (c) Facebook, Inc. and its affiliates. @REM Copyright (c) Meta Platforms, Inc. and affiliates.
@REM All rights reserved. @REM All rights reserved.
@REM @REM
@REM This source code is licensed under the BSD-style license found in the @REM This source code is licensed under the BSD-style license found in the
@REM LICENSE file in the root directory of this source tree. @REM LICENSE file in the root directory of this source tree.
:: Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
start /wait "" "%miniconda_exe%" /S /InstallationType=JustMe /RegisterPython=0 /AddToPath=0 /D=%tmp_conda% start /wait "" "%miniconda_exe%" /S /InstallationType=JustMe /RegisterPython=0 /AddToPath=0 /D=%tmp_conda%

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env bash #!/usr/bin/env bash
# Copyright (c) Facebook, Inc. and its affiliates. # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved. # All rights reserved.
# #
# This source code is licensed under the BSD-style license found in the # This source code is licensed under the BSD-style license found in the

View File

@@ -20,10 +20,11 @@ commands.
``` ```
import sys import sys
import torch import torch
pyt_version_str=torch.__version__.split("+")[0].replace(".", "")
version_str="".join([ version_str="".join([
f"py3{sys.version_info.minor}_cu", f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""), torch.version.cuda.replace(".",""),
f"_pyt{torch.__version__[0:5:2]}" f"_pyt{pyt_version_str}"
]) ])
!pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
``` ```
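The old slice `torch.__version__[0:5:2]` picked characters 0, 2, and 4, which only produced a correct tag while both major and minor versions were single digits; the new `pyt_version_str` strips the local suffix (e.g. `+cu113`) and the dots instead. A sketch with hypothetical version strings:

```python
def old_tag(torch_version: str) -> str:
    # Characters 0, 2, 4 -- only correct for versions like "1.9.0".
    return torch_version[0:5:2]

def new_tag(torch_version: str) -> str:
    # Drop any local build suffix, then drop the dots.
    return torch_version.split("+")[0].replace(".", "")

print(old_tag("1.9.0+cu102"))   # "190"  -- fine
print(old_tag("1.11.0+cu113"))  # "11." -- broken for two-digit minors
print(new_tag("1.11.0+cu113"))  # "1110"

# The resulting wheel tag, e.g. for Python 3.9 / CUDA 11.3 / torch 1.11.0:
version_str = "".join(["py39_cu", "11.3".replace(".", ""),
                       "_pyt", new_tag("1.11.0+cu113")])
print(version_str)  # "py39_cu113_pyt1110"
```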

View File

@@ -1,5 +1,5 @@
#!/usr/bin/bash #!/usr/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates. # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved. # All rights reserved.
# #
# This source code is licensed under the BSD-style license found in the # This source code is licensed under the BSD-style license found in the

View File

@@ -1,8 +1,11 @@
#!/usr/bin/bash #!/usr/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates. # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved. # All rights reserved.
# #
# This source code is licensed under the BSD-style license found in the # This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree. # LICENSE file in the root directory of this source tree.
sudo docker run --rm -v "$PWD/../../:/inside" pytorch/conda-cuda bash inside/packaging/linux_wheels/inside.sh sudo docker run --rm -v "$PWD/../../:/inside" pytorch/conda-cuda bash inside/packaging/linux_wheels/inside.sh
sudo docker run --rm -v "$PWD/../../:/inside" -e SELECTED_CUDA=cu113 pytorch/conda-builder:cuda113 bash inside/packaging/linux_wheels/inside.sh
sudo docker run --rm -v "$PWD/../../:/inside" -e SELECTED_CUDA=cu115 pytorch/conda-builder:cuda115 bash inside/packaging/linux_wheels/inside.sh
sudo docker run --rm -v "$PWD/../../:/inside" -e SELECTED_CUDA=cu116 pytorch/conda-builder:cuda116 bash inside/packaging/linux_wheels/inside.sh

View File

@@ -1,5 +1,5 @@
#!/bin/bash #!/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates. # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved. # All rights reserved.
# #
# This source code is licensed under the BSD-style license found in the # This source code is licensed under the BSD-style license found in the
@@ -26,18 +26,13 @@ echo "CUB_HOME is now $CUB_HOME"
# As a rule, we want to build for any combination of dependencies which is supported by # As a rule, we want to build for any combination of dependencies which is supported by
# PyTorch3D and not older than the current Google Colab set up. # PyTorch3D and not older than the current Google Colab set up.
PYTHON_VERSIONS="3.7 3.8 3.9" PYTHON_VERSIONS="3.7 3.8 3.9 3.10"
# the keys are pytorch versions # the keys are pytorch versions
declare -A CONDA_CUDA_VERSIONS=( declare -A CONDA_CUDA_VERSIONS=(
# ["1.4.0"]="cu101" ["1.10.1"]="cu111 cu113"
# ["1.5.0"]="cu101 cu102" ["1.10.2"]="cu111 cu113"
# ["1.5.1"]="cu101 cu102" ["1.10.0"]="cu111 cu113"
# ["1.6.0"]="cu101 cu102" ["1.11.0"]="cu111 cu113 cu115"
# ["1.7.0"]="cu101 cu102 cu110"
# ["1.7.1"]="cu101 cu102 cu110"
# ["1.8.0"]="cu101 cu102 cu111"
# ["1.8.1"]="cu101 cu102 cu111"
["1.9.0"]="cu102 cu111"
) )
@@ -46,22 +41,59 @@ for python_version in $PYTHON_VERSIONS
do do
for pytorch_version in "${!CONDA_CUDA_VERSIONS[@]}" for pytorch_version in "${!CONDA_CUDA_VERSIONS[@]}"
do do
if [[ "3.6 3.7 3.8" != *$python_version* ]] && [[ "1.4.0 1.5.0 1.5.1 1.6.0 1.7.0" == *$pytorch_version* ]] if [[ "3.7 3.8" != *$python_version* ]] && [[ "1.7.0" == *$pytorch_version* ]]
then then
#python 3.9 and later not supported by pytorch 1.7.0 and before #python 3.9 and later not supported by pytorch 1.7.0 and before
continue continue
fi fi
if [[ "3.7 3.8 3.9" != *$python_version* ]] && [[ "1.7.0 1.7.1 1.8.0 1.8.1 1.9.0 1.9.1 1.10.0 1.10.1 1.10.2" == *$pytorch_version* ]]
if [[ "3.9" == "$python_version" ]] then
#python 3.10 and later not supported by pytorch 1.10.2 and before
continue
fi
extra_channel="-c conda-forge"
if [[ "1.11.0" == "$pytorch_version" ]]
then then
extra_channel="-c conda-forge"
else
extra_channel="" extra_channel=""
fi fi
for cu_version in ${CONDA_CUDA_VERSIONS[$pytorch_version]} for cu_version in ${CONDA_CUDA_VERSIONS[$pytorch_version]}
do do
if [[ "cu113 cu115 cu116" == *$cu_version* ]]
# ^^^ CUDA versions listed here have to be built
# in their own containers.
then
if [[ $SELECTED_CUDA != "$cu_version" ]]
then
continue
fi
elif [[ $SELECTED_CUDA != "" ]]
then
continue
fi
case "$cu_version" in case "$cu_version" in
cu116)
export CUDA_HOME=/usr/local/cuda-11.6/
export CUDA_TAG=11.6
export NVCC_FLAGS="-gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_50,code=compute_50"
;;
cu115)
export CUDA_HOME=/usr/local/cuda-11.5/
export CUDA_TAG=11.5
export NVCC_FLAGS="-gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_50,code=compute_50"
;;
cu113)
export CUDA_HOME=/usr/local/cuda-11.3/
export CUDA_TAG=11.3
export NVCC_FLAGS="-gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_50,code=compute_50"
;;
cu112)
export CUDA_HOME=/usr/local/cuda-11.2/
export CUDA_TAG=11.2
export NVCC_FLAGS="-gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_50,code=compute_50"
;;
cu111)
export CUDA_HOME=/usr/local/cuda-11.1/
export CUDA_TAG=11.1
@@ -97,6 +129,7 @@ do
conda create -y -n "$tag" "python=$python_version"
conda activate "$tag"
# shellcheck disable=SC2086
conda install -y -c pytorch $extra_channel "pytorch=$pytorch_version" "cudatoolkit=$CUDA_TAG" torchvision
pip install fvcore iopath
echo "python version" "$python_version" "pytorch version" "$pytorch_version" "cuda version" "$cu_version" "tag" "$tag"

View File

@@ -1,10 +1,9 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
import subprocess
from pathlib import Path
from typing import List
@@ -15,13 +14,12 @@ dest = "s3://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/"
output = Path("output")

def aws_s3_cmd(args) -> List[str]:
    """
    This function returns the full args for subprocess to do a command
    with aws.
    """
    cmd_args = ["aws", "s3", "--profile", "saml"] + args
    return cmd_args
@@ -31,7 +29,7 @@ def fs3_exists(path) -> bool:
    In fact, will also return True if there is a file which has the given
    path as a prefix, but we are careful about this.
    """
    out = subprocess.check_output(aws_s3_cmd(["ls", path]))
    return len(out) != 0
@@ -41,7 +39,7 @@ def get_html_wrappers() -> None:
        assert not output_wrapper.exists()
        dest_wrapper = dest + directory.name + "/download.html"
        if fs3_exists(dest_wrapper):
            subprocess.check_call(aws_s3_cmd(["cp", dest_wrapper, str(output_wrapper)]))

def write_html_wrappers() -> None:
@@ -70,7 +68,7 @@ def to_aws() -> None:
        for file in directory.iterdir():
            print(file)
            subprocess.check_call(
                aws_s3_cmd(["cp", str(file), dest + str(file.relative_to(output))])
            )
@@ -79,3 +77,11 @@ if __name__ == "__main__":
    # get_html_wrappers()
    write_html_wrappers()
    to_aws()
# see all files with
# aws s3 --profile saml ls --recursive s3://dl.fbaipublicfiles.com/pytorch3d/
# empty current with
# aws s3 --profile saml rm --recursive
# s3://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/

View File

@@ -1,9 +1,13 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
# shellcheck shell=bash
# A set of useful bash functions for common functionality we need to do in
# many build scripts
# Setup CUDA environment variables, based on CU_VERSION
#
# Inputs:
@@ -51,6 +55,50 @@ setup_cuda() {
# Now work out the CUDA settings
case "$CU_VERSION" in
cu116)
if [[ "$OSTYPE" == "msys" ]]; then
export CUDA_HOME="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6"
else
export CUDA_HOME=/usr/local/cuda-11.6/
fi
export FORCE_CUDA=1
# Hard-coding gencode flags is temporary situation until
# https://github.com/pytorch/pytorch/pull/23408 lands
export NVCC_FLAGS="-gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_50,code=compute_50"
;;
cu115)
if [[ "$OSTYPE" == "msys" ]]; then
export CUDA_HOME="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.5"
else
export CUDA_HOME=/usr/local/cuda-11.5/
fi
export FORCE_CUDA=1
# Hard-coding gencode flags is temporary situation until
# https://github.com/pytorch/pytorch/pull/23408 lands
export NVCC_FLAGS="-gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_50,code=compute_50"
;;
cu113)
if [[ "$OSTYPE" == "msys" ]]; then
export CUDA_HOME="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.3"
else
export CUDA_HOME=/usr/local/cuda-11.3/
fi
export FORCE_CUDA=1
# Hard-coding gencode flags is temporary situation until
# https://github.com/pytorch/pytorch/pull/23408 lands
export NVCC_FLAGS="-gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_50,code=compute_50"
;;
cu112)
if [[ "$OSTYPE" == "msys" ]]; then
export CUDA_HOME="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.2"
else
export CUDA_HOME=/usr/local/cuda-11.2/
fi
export FORCE_CUDA=1
# Hard-coding gencode flags is temporary situation until
# https://github.com/pytorch/pytorch/pull/23408 lands
export NVCC_FLAGS="-gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_50,code=compute_50"
;;
cu111)
if [[ "$OSTYPE" == "msys" ]]; then
export CUDA_HOME="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.1"
@@ -267,9 +315,20 @@ setup_conda_cudatoolkit_constraint() {
export CONDA_CUDATOOLKIT_CONSTRAINT=""
else
case "$CU_VERSION" in
cu116)
export CONDA_CUDATOOLKIT_CONSTRAINT="- cudatoolkit >=11.6,<11.7 # [not osx]"
;;
cu115)
export CONDA_CUDATOOLKIT_CONSTRAINT="- cudatoolkit >=11.5,<11.6 # [not osx]"
;;
cu113)
export CONDA_CUDATOOLKIT_CONSTRAINT="- cudatoolkit >=11.3,<11.4 # [not osx]"
;;
cu112)
export CONDA_CUDATOOLKIT_CONSTRAINT="- cudatoolkit >=11.2,<11.3 # [not osx]"
;;
cu111)
export CONDA_CUDATOOLKIT_CONSTRAINT="- cudatoolkit >=11.1,<11.2 # [not osx]"
;;
cu110)
export CONDA_CUDATOOLKIT_CONSTRAINT="- cudatoolkit >=11.0,<11.1 # [not osx]"

View File

@@ -45,9 +45,12 @@ test:
- docs
requires:
- imageio
- hydra-core
- accelerate
- lpips
commands:
#pytest .
python -m unittest discover -v -s tests -t .
about:

View File

@@ -1,4 +1,4 @@
@REM Copyright (c) Meta Platforms, Inc. and affiliates.
@REM All rights reserved.
@REM
@REM This source code is licensed under the BSD-style license found in the

View File

@@ -1,4 +1,4 @@
@REM Copyright (c) Meta Platforms, Inc. and affiliates.
@REM All rights reserved.
@REM
@REM This source code is licensed under the BSD-style license found in the

View File

@@ -1,4 +1,4 @@
@REM Copyright (c) Meta Platforms, Inc. and affiliates.
@REM All rights reserved.
@REM
@REM This source code is licensed under the BSD-style license found in the

View File

@@ -1,4 +1,4 @@
@REM Copyright (c) Meta Platforms, Inc. and affiliates.
@REM All rights reserved.
@REM
@REM This source code is licensed under the BSD-style license found in the

View File

@@ -1,4 +1,4 @@
@REM Copyright (c) Meta Platforms, Inc. and affiliates.
@REM All rights reserved.
@REM
@REM This source code is licensed under the BSD-style license found in the

View File

@@ -1,4 +1,4 @@
@REM Copyright (c) Meta Platforms, Inc. and affiliates.
@REM All rights reserved.
@REM
@REM This source code is licensed under the BSD-style license found in the

projects/__init__.py Normal file
View File

@@ -0,0 +1,5 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

View File

@@ -0,0 +1,280 @@
# Introduction
Implicitron is a PyTorch3D-based framework for novel-view synthesis via neural-network-based representations.
# License
Implicitron is distributed as part of PyTorch3D under the [BSD license](https://github.com/facebookresearch/pytorch3d/blob/main/LICENSE).
It includes code from the [NeRF](https://github.com/bmild/nerf), [SRN](http://github.com/vsitzmann/scene-representation-networks) and [IDR](http://github.com/lioryariv/idr) repos.
See [LICENSE-3RD-PARTY](https://github.com/facebookresearch/pytorch3d/blob/main/LICENSE-3RD-PARTY) for their licenses.
# Installation
There are three ways to set up Implicitron, depending on the flexibility level required.
If you only want to train or evaluate models as implemented, changing only the parameters, you can simply install the package.
Implicitron also provides a flexible API that supports user-defined plug-ins;
if you want to re-implement some of the components without changing the high-level pipeline, you need to create a custom launcher script.
The most flexible option, though, is to clone the PyTorch3D repo and build it from source, which allows changing the code in arbitrary ways.
Below, we describe all three options in more detail.
## [Option 1] Running an executable from the package
This option allows you to use the code as is without changing the implementations.
Only configuration can be changed (see [Configuration system](#configuration-system)).
For this setup, install the dependencies and PyTorch3D from conda following [the guide](https://github.com/facebookresearch/pytorch3d/blob/master/INSTALL.md#1-install-with-cuda-support-from-anaconda-cloud-on-linux-only). Then, install implicitron-specific dependencies:
```shell
pip install "hydra-core>=1.1" visdom lpips matplotlib accelerate
```
The runner executable is available as the `pytorch3d_implicitron_runner` shell command.
See [Running](#running) section below for examples of training and evaluation commands.
## [Option 2] Supporting custom implementations
To plug in custom implementations, for example, of renderer or implicit-function protocols, you need to create your own runner script and import the plug-in implementations there.
First, install PyTorch3D and Implicitron dependencies as described in the previous section.
Then, implement the custom script; copying `pytorch3d/projects/implicitron_trainer/experiment.py` is a good place to start.
See [Custom plugins](#custom-plugins) for more information on how to import implementations and enable them in the configs.
## [Option 3] Cloning PyTorch3D repo
This is the most flexible way to set up Implicitron as it allows changing the code directly.
It allows modifying the high-level rendering pipeline or implementing yet-unsupported loss functions.
Please follow the instructions to [install PyTorch3D from a local clone](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md#2-install-from-a-local-clone).
Then, install Implicitron-specific dependencies:
```shell
pip install "hydra-core>=1.1" visdom lpips matplotlib accelerate
```
You are still encouraged to implement custom plugins as above where possible as it makes reusing the code easier.
The executable is located in `pytorch3d/projects/implicitron_trainer`.
# Running
This section assumes that you use the executable provided by the installed package.
If you have a custom `experiment.py` script (as in Option 2 above), replace the executable with the path to your script.
## Training
To run training, pass a yaml config file, followed by a list of overridden arguments.
For example, to train NeRF on the first skateboard sequence from CO3D dataset, you can run:
```shell
dataset_args=data_source_args.dataset_map_provider_JsonIndexDatasetMapProvider_args
pytorch3d_implicitron_runner --config-path ./configs/ --config-name repro_singleseq_nerf $dataset_args.dataset_root=<DATASET_ROOT> $dataset_args.category='skateboard' $dataset_args.test_restrict_sequence_id=0 test_when_finished=True exp_dir=<CHECKPOINT_DIR>
```
Here, `--config-path` points to the config path relative to `pytorch3d_implicitron_runner` location;
`--config-name` picks the config (in this case, `repro_singleseq_nerf.yaml`);
`test_when_finished` will launch evaluation script once training is finished.
Replace `<DATASET_ROOT>` with the location where the dataset in Implicitron format is stored
and `<CHECKPOINT_DIR>` with a directory where checkpoints will be dumped during training.
Other configuration parameters can be overridden in the same way.
See [Configuration system](#configuration-system) section for more information on this.
## Evaluation
To run evaluation on the latest checkpoint after (or during) training, simply add `eval_only=True` to your training command.
E.g. for executing the evaluation on the NeRF skateboard sequence, you can run:
```shell
dataset_args=data_source_args.dataset_map_provider_JsonIndexDatasetMapProvider_args
pytorch3d_implicitron_runner --config-path ./configs/ --config-name repro_singleseq_nerf $dataset_args.dataset_root=<CO3D_DATASET_ROOT> $dataset_args.category='skateboard' $dataset_args.test_restrict_sequence_id=0 exp_dir=<CHECKPOINT_DIR> eval_only=True
```
Evaluation prints the metrics to `stdout` and dumps them to a json file in `exp_dir`.
## Visualisation
The script produces a video of renders from a trained model along a pre-defined camera trajectory.
In order for it to work, `ffmpeg` needs to be installed:
```shell
conda install ffmpeg
```
Here is an example of calling the script:
```shell
projects/implicitron_trainer/visualize_reconstruction.py exp_dir=<CHECKPOINT_DIR> visdom_show_preds=True n_eval_cameras=40 render_size="[64,64]" video_size="[256,256]"
```
The argument `n_eval_cameras` sets the number of rendering viewpoints sampled on a trajectory, which defaults to a circular fly-around;
`render_size` sets the size of a render passed to the model, which can be resized to `video_size` before writing.
Rendered videos of images, masks, and depth maps will be saved to `<CHECKPOINT_DIR>/vis`.
# Configuration system
We use hydra and OmegaConf to parse the configs.
The config schema and default values are defined by the dataclasses implementing the modules.
More specifically, if a class derives from `Configurable`, its fields can be set in config yaml files or overridden in CLI.
For example, `GenericModel` has a field `render_image_width` with the default value 400.
If it is specified in the yaml config file or in CLI command, the new value will be used.
Configurables can form hierarchies.
For example, `GenericModel` has a field `raysampler: RaySampler`, which is also Configurable.
In the config, inner parameters can be propagated using the `_args` postfix, e.g. to change `raysampler.n_pts_per_ray_training` (the number of sampled points per ray), the node `raysampler_args.n_pts_per_ray_training` should be specified.
The root of the hierarchy is defined by `ExperimentConfig` dataclass.
It has top-level fields like `eval_only` which was used above for running evaluation by adding a CLI override.
Additionally, it has non-leaf nodes like `generic_model_args`, which dispatches the config parameters to `GenericModel`. Thus, changing the model parameters may be achieved in two ways: either by editing the config file, e.g.
```yaml
generic_model_args:
  render_image_width: 800
  raysampler_args:
    n_pts_per_ray_training: 128
```
or, equivalently, by adding the following to `pytorch3d_implicitron_runner` arguments:
```shell
generic_model_args.render_image_width=800 generic_model_args.raysampler_args.n_pts_per_ray_training=128
```
See the documentation in `pytorch3d/implicitron/tools/config.py` for more details.
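Conceptually, each dotted override names a path through the nested config. As a minimal stdlib-only sketch of this behaviour (a simplified illustration, not the actual hydra/OmegaConf machinery used by the runner):

```python
def apply_override(cfg: dict, dotted_key: str, value) -> dict:
    """Walk the nested config along the dotted path and set the leaf value."""
    *path, leaf = dotted_key.split(".")
    node = cfg
    for key in path:
        # create intermediate nodes if they are missing
        node = node.setdefault(key, {})
    node[leaf] = value
    return cfg

cfg = {
    "generic_model_args": {
        "render_image_width": 400,
        "raysampler_args": {"n_pts_per_ray_training": 64},
    }
}
apply_override(cfg, "generic_model_args.render_image_width", 800)
apply_override(cfg, "generic_model_args.raysampler_args.n_pts_per_ray_training", 128)
```

In the real system, hydra additionally validates each override against the schema derived from the dataclasses, so mistyped keys are rejected rather than silently created.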
## Replaceable implementations
Sometimes changing the model parameters does not provide enough flexibility, and you want to provide a new implementation for a building block.
The configuration system also supports it!
Abstract classes like `BaseRenderer` derive from `ReplaceableBase` instead of `Configurable`.
This means that other Configurables can refer to them using the base type, while the specific implementation is chosen in the config using a `_class_type`-postfixed node.
In that case, the `_args` node name has to include the implementation type.
More specifically, to change renderer settings, the config will look like this:
```yaml
generic_model_args:
  renderer_class_type: LSTMRenderer
  renderer_LSTMRenderer_args:
    num_raymarch_steps: 10
    hidden_size: 16
```
See the documentation in `pytorch3d/implicitron/tools/config.py` for more details on the configuration system.
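To make the `_class_type` / `_args` convention concrete, here is a deliberately simplified, self-contained sketch of registry-based dispatch (the real implementation lives in `pytorch3d/implicitron/tools/config.py` and differs in detail; the names below are toy stand-ins):

```python
_REGISTRY = {}

def register(cls):
    """Toy stand-in for Implicitron's registry decorator."""
    _REGISTRY[cls.__name__] = cls
    return cls

class BaseRenderer:
    """Toy stand-in for a ReplaceableBase class."""

@register
class LSTMRenderer(BaseRenderer):
    def __init__(self, num_raymarch_steps: int = 10, hidden_size: int = 16):
        self.num_raymarch_steps = num_raymarch_steps
        self.hidden_size = hidden_size

def create_renderer(cfg: dict) -> BaseRenderer:
    # The chosen implementation comes from the *_class_type node; its
    # parameters come from the matching *_<ClassName>_args node.
    class_type = cfg["renderer_class_type"]
    args = cfg.get(f"renderer_{class_type}_args", {})
    return _REGISTRY[class_type](**args)

renderer = create_renderer({
    "renderer_class_type": "LSTMRenderer",
    "renderer_LSTMRenderer_args": {"num_raymarch_steps": 10, "hidden_size": 16},
})
```

This design lets a config file select an implementation by name without the calling code hard-coding the concrete class.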
## Custom plugins
If you have an idea for another implementation of a replaceable component, it can be plugged in without changing the core code.
For that, you need to set up Implicitron through option 2 or 3 above.
Let's say you want to implement a renderer that accumulates opacities similar to an X-ray machine.
First, create a module `x_ray_renderer.py` with a class deriving from `BaseRenderer`:
```python
import torch

# (adjust import paths to match your PyTorch3D version)
from pytorch3d.implicitron.models.renderer.base import (
    BaseRenderer,
    EvaluationMode,
    RendererOutput,
)
from pytorch3d.implicitron.tools.config import registry


@registry.register
class XRayRenderer(BaseRenderer, torch.nn.Module):
    n_pts_per_ray: int = 64

    # if there are other base classes, make sure to call `super().__init__()` explicitly
    def __post_init__(self):
        super().__init__()
        # custom initialization

    def forward(
        self,
        ray_bundle,
        implicit_functions=[],
        evaluation_mode: EvaluationMode = EvaluationMode.EVALUATION,
        **kwargs,
    ) -> RendererOutput:
        ...
```
Note the `@registry.register` decorator, which registers the plug-in as an implementation of `BaseRenderer`.
IMPORTANT: In order for it to run, the class (or its enclosing module) has to be imported in your launch script. Additionally, this has to be done before parsing the root configuration class `ExperimentConfig`.
Simply add `import x_ray_renderer` at the beginning of `experiment.py`.
After that, you should be able to change the config with:
```yaml
generic_model_args:
  renderer_class_type: XRayRenderer
  renderer_XRayRenderer_args:
    n_pts_per_ray: 128
```
to replace the implementation and potentially override the parameters.
# Code and config structure
As described above, the config structure is parsed automatically from the module hierarchy.
In particular, model parameters are contained in the `generic_model_args` node, and dataset parameters in the `data_source_args` node.
Here is the class structure (single-line edges show aggregation, while double lines show available implementations):
```
generic_model_args: GenericModel
    └-- sequence_autodecoder_args: Autodecoder
    └-- raysampler_args: RaySampler
    └-- renderer_*_args: BaseRenderer
        ╘== MultiPassEmissionAbsorptionRenderer
        ╘== LSTMRenderer
        ╘== SignedDistanceFunctionRenderer
            └-- ray_tracer_args: RayTracing
            └-- ray_normal_coloring_network_args: RayNormalColoringNetwork
    └-- implicit_function_*_args: ImplicitFunctionBase
        ╘== NeuralRadianceFieldImplicitFunction
        ╘== SRNImplicitFunction
            └-- raymarch_function_args: SRNRaymarchFunction
            └-- pixel_generator_args: SRNPixelGenerator
        ╘== SRNHyperNetImplicitFunction
            └-- hypernet_args: SRNRaymarchHyperNet
            └-- pixel_generator_args: SRNPixelGenerator
        ╘== IdrFeatureField
    └-- image_feature_extractor_*_args: FeatureExtractorBase
        ╘== ResNetFeatureExtractor
    └-- view_sampler_args: ViewSampler
    └-- feature_aggregator_*_args: FeatureAggregatorBase
        ╘== IdentityFeatureAggregator
        ╘== AngleWeightedIdentityFeatureAggregator
        ╘== AngleWeightedReductionFeatureAggregator
        ╘== ReductionFeatureAggregator
solver_args: init_optimizer
data_source_args: ImplicitronDataSource
    └-- dataset_map_provider_*_args
    └-- data_loader_map_provider_*_args
```
Please look at the annotations of the respective classes or functions for the lists of hyperparameters.
# Reproducing CO3D experiments
Common Objects in 3D (CO3D) is a large-scale dataset of videos of rigid objects grouped into 50 common categories.
Implicitron provides implementations and config files to reproduce the results from [the paper](https://arxiv.org/abs/2109.00512).
Please follow [the link](https://github.com/facebookresearch/co3d#automatic-batch-download) for the instructions to download the dataset.
In training and evaluation scripts, use the download location as `<DATASET_ROOT>`.
It is also possible to set the environment variable `CO3D_DATASET_ROOT` instead of specifying it.
To reproduce the experiments from the paper, use the following configs. For single-sequence experiments:
| Method | config file |
|-----------------|-------------------------------------|
| NeRF | repro_singleseq_nerf.yaml |
| NeRF + WCE | repro_singleseq_nerf_wce.yaml |
| NerFormer | repro_singleseq_nerformer.yaml |
| IDR | repro_singleseq_idr.yaml |
| SRN | repro_singleseq_srn_noharm.yaml |
| SRN + γ | repro_singleseq_srn.yaml |
| SRN + WCE | repro_singleseq_srn_wce_noharm.yaml |
| SRN + WCE + γ   | repro_singleseq_srn_wce.yaml        |
For multi-sequence experiments (without generalisation to new sequences):
| Method | config file |
|-----------------|--------------------------------------------|
| NeRF + AD | repro_multiseq_nerf_ad.yaml |
| SRN + AD | repro_multiseq_srn_ad_hypernet_noharm.yaml |
| SRN + γ + AD | repro_multiseq_srn_ad_hypernet.yaml |
For multi-sequence experiments (with generalisation to new sequences):
| Method | config file |
|-----------------|--------------------------------------|
| NeRF + WCE | repro_multiseq_nerf_wce.yaml |
| NerFormer | repro_multiseq_nerformer.yaml |
| SRN + WCE | repro_multiseq_srn_wce_noharm.yaml |
| SRN + WCE + γ | repro_multiseq_srn_wce.yaml |

View File

@@ -0,0 +1,5 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

View File

@@ -0,0 +1,75 @@
defaults:
- default_config
- _self_
exp_dir: ./data/exps/base/
architecture: generic
visualize_interval: 0
visdom_port: 8097
data_source_args:
  data_loader_map_provider_class_type: SequenceDataLoaderMapProvider
  dataset_map_provider_class_type: JsonIndexDatasetMapProvider
  data_loader_map_provider_SequenceDataLoaderMapProvider_args:
    dataset_length_train: 1000
    dataset_length_val: 1
    num_workers: 8
  dataset_map_provider_JsonIndexDatasetMapProvider_args:
    dataset_root: ${oc.env:CO3D_DATASET_ROOT}
    n_frames_per_sequence: -1
    test_on_train: true
    test_restrict_sequence_id: 0
    dataset_JsonIndexDataset_args:
      load_point_clouds: false
      mask_depths: false
      mask_images: false
generic_model_args:
  loss_weights:
    loss_mask_bce: 1.0
    loss_prev_stage_mask_bce: 1.0
    loss_autodecoder_norm: 0.01
    loss_rgb_mse: 1.0
    loss_prev_stage_rgb_mse: 1.0
  output_rasterized_mc: false
  chunk_size_grid: 102400
  render_image_height: 400
  render_image_width: 400
  num_passes: 2
  implicit_function_NeuralRadianceFieldImplicitFunction_args:
    n_harmonic_functions_xyz: 10
    n_harmonic_functions_dir: 4
    n_hidden_neurons_xyz: 256
    n_hidden_neurons_dir: 128
    n_layers_xyz: 8
    append_xyz:
    - 5
    latent_dim: 0
  raysampler_AdaptiveRaySampler_args:
    n_rays_per_image_sampled_from_mask: 1024
    scene_extent: 8.0
    n_pts_per_ray_training: 64
    n_pts_per_ray_evaluation: 64
    stratified_point_sampling_training: true
    stratified_point_sampling_evaluation: false
  renderer_MultiPassEmissionAbsorptionRenderer_args:
    n_pts_per_ray_fine_training: 64
    n_pts_per_ray_fine_evaluation: 64
    append_coarse_samples_to_fine: true
    density_noise_std_train: 1.0
  view_pooler_args:
    view_sampler_args:
      masked_sampling: false
  image_feature_extractor_ResNetFeatureExtractor_args:
    stages:
    - 1
    - 2
    - 3
    - 4
    proj_dim: 16
    image_rescale: 0.32
    first_max_pool: false
solver_args:
  breed: adam
  lr: 0.0005
  lr_policy: multistep
  max_epochs: 2000
  momentum: 0.9
  weight_decay: 0.0

View File

@@ -0,0 +1,17 @@
generic_model_args:
  image_feature_extractor_class_type: ResNetFeatureExtractor
  image_feature_extractor_ResNetFeatureExtractor_args:
    add_images: true
    add_masks: true
    first_max_pool: true
    image_rescale: 0.375
    l2_norm: true
    name: resnet34
    normalize_image: true
    pretrained: true
    stages:
    - 1
    - 2
    - 3
    - 4
    proj_dim: 32

View File

@@ -0,0 +1,17 @@
generic_model_args:
  image_feature_extractor_class_type: ResNetFeatureExtractor
  image_feature_extractor_ResNetFeatureExtractor_args:
    add_images: true
    add_masks: true
    first_max_pool: false
    image_rescale: 0.375
    l2_norm: true
    name: resnet34
    normalize_image: true
    pretrained: true
    stages:
    - 1
    - 2
    - 3
    - 4
    proj_dim: 16

View File

@@ -0,0 +1,18 @@
generic_model_args:
  image_feature_extractor_class_type: ResNetFeatureExtractor
  image_feature_extractor_ResNetFeatureExtractor_args:
    stages:
    - 1
    - 2
    - 3
    first_max_pool: false
    proj_dim: -1
    l2_norm: false
    image_rescale: 0.375
    name: resnet34
    normalize_image: true
    pretrained: true
  view_pooler_args:
    feature_aggregator_AngleWeightedReductionFeatureAggregator_args:
      reduction_functions:
      - AVG

View File

@@ -0,0 +1,35 @@
defaults:
- repro_base.yaml
- _self_
data_source_args:
  data_loader_map_provider_SequenceDataLoaderMapProvider_args:
    batch_size: 10
    dataset_length_train: 1000
    dataset_length_val: 1
    num_workers: 8
    train_conditioning_type: SAME
    val_conditioning_type: SAME
    test_conditioning_type: SAME
    images_per_seq_options:
    - 2
    - 3
    - 4
    - 5
    - 6
    - 7
    - 8
    - 9
    - 10
  dataset_map_provider_JsonIndexDatasetMapProvider_args:
    assert_single_seq: false
    task_str: multisequence
    n_frames_per_sequence: -1
    test_on_train: true
    test_restrict_sequence_id: 0
solver_args:
  max_epochs: 3000
  milestones:
  - 1000
camera_difficulty_bin_breaks:
- 0.666667
- 0.833334

View File

@@ -0,0 +1,65 @@
defaults:
- repro_multiseq_base.yaml
- _self_
generic_model_args:
  loss_weights:
    loss_mask_bce: 100.0
    loss_kl: 0.0
    loss_rgb_mse: 1.0
    loss_eikonal: 0.1
  chunk_size_grid: 65536
  num_passes: 1
  output_rasterized_mc: true
  sampling_mode_training: mask_sample
  global_encoder_class_type: SequenceAutodecoder
  global_encoder_SequenceAutodecoder_args:
    autodecoder_args:
      n_instances: 20000
      init_scale: 1.0
      encoding_dim: 256
  implicit_function_IdrFeatureField_args:
    n_harmonic_functions_xyz: 6
    bias: 0.6
    d_in: 3
    d_out: 1
    dims:
    - 512
    - 512
    - 512
    - 512
    - 512
    - 512
    - 512
    - 512
    geometric_init: true
    pooled_feature_dim: 0
    skip_in:
    - 6
    weight_norm: true
  renderer_SignedDistanceFunctionRenderer_args:
    ray_tracer_args:
      line_search_step: 0.5
      line_step_iters: 3
      n_secant_steps: 8
      n_steps: 100
      object_bounding_sphere: 8.0
      sdf_threshold: 5.0e-05
    ray_normal_coloring_network_args:
      d_in: 9
      d_out: 3
      dims:
      - 512
      - 512
      - 512
      - 512
      mode: idr
      n_harmonic_functions_dir: 4
      pooled_feature_dim: 0
      weight_norm: true
  raysampler_AdaptiveRaySampler_args:
    n_rays_per_image_sampled_from_mask: 1024
    n_pts_per_ray_training: 0
    n_pts_per_ray_evaluation: 0
    scene_extent: 8.0
  renderer_class_type: SignedDistanceFunctionRenderer
  implicit_function_class_type: IdrFeatureField

View File

@@ -0,0 +1,11 @@
defaults:
- repro_multiseq_base.yaml
- _self_
generic_model_args:
  chunk_size_grid: 16000
  view_pooler_enabled: false
  global_encoder_class_type: SequenceAutodecoder
  global_encoder_SequenceAutodecoder_args:
    autodecoder_args:
      n_instances: 20000
      encoding_dim: 256

View File

@@ -0,0 +1,10 @@
defaults:
- repro_multiseq_base.yaml
- repro_feat_extractor_unnormed.yaml
- _self_
clip_grad: 1.0
generic_model_args:
  chunk_size_grid: 16000
  view_pooler_enabled: true
  raysampler_AdaptiveRaySampler_args:
    n_rays_per_image_sampled_from_mask: 850

View File

@@ -0,0 +1,17 @@
defaults:
- repro_multiseq_base.yaml
- repro_feat_extractor_transformer.yaml
- _self_
generic_model_args:
  chunk_size_grid: 16000
  raysampler_AdaptiveRaySampler_args:
    n_rays_per_image_sampled_from_mask: 800
    n_pts_per_ray_training: 32
    n_pts_per_ray_evaluation: 32
  renderer_MultiPassEmissionAbsorptionRenderer_args:
    n_pts_per_ray_fine_training: 16
    n_pts_per_ray_fine_evaluation: 16
  implicit_function_class_type: NeRFormerImplicitFunction
  view_pooler_enabled: true
  view_pooler_args:
    feature_aggregator_class_type: IdentityFeatureAggregator

View File

@@ -0,0 +1,6 @@
defaults:
- repro_multiseq_nerformer.yaml
- _self_
generic_model_args:
  view_pooler_args:
    feature_aggregator_class_type: AngleWeightedIdentityFeatureAggregator

View File

@@ -0,0 +1,34 @@
defaults:
- repro_multiseq_base.yaml
- _self_
generic_model_args:
  chunk_size_grid: 16000
  view_pooler_enabled: false
  n_train_target_views: -1
  num_passes: 1
  loss_weights:
    loss_rgb_mse: 200.0
    loss_prev_stage_rgb_mse: 0.0
    loss_mask_bce: 1.0
    loss_prev_stage_mask_bce: 0.0
    loss_autodecoder_norm: 0.001
    depth_neg_penalty: 10000.0
  global_encoder_class_type: SequenceAutodecoder
  global_encoder_SequenceAutodecoder_args:
    autodecoder_args:
      encoding_dim: 256
      n_instances: 20000
  raysampler_class_type: NearFarRaySampler
  raysampler_NearFarRaySampler_args:
    n_rays_per_image_sampled_from_mask: 2048
    min_depth: 0.05
    max_depth: 0.05
    n_pts_per_ray_training: 1
    n_pts_per_ray_evaluation: 1
    stratified_point_sampling_training: false
    stratified_point_sampling_evaluation: false
  renderer_class_type: LSTMRenderer
  implicit_function_class_type: SRNHyperNetImplicitFunction
solver_args:
  breed: adam
  lr: 5.0e-05

View File

@@ -0,0 +1,10 @@
defaults:
- repro_multiseq_srn_ad_hypernet.yaml
- _self_
generic_model_args:
  num_passes: 1
  implicit_function_SRNHyperNetImplicitFunction_args:
    pixel_generator_args:
      n_harmonic_functions: 0
    hypernet_args:
      n_harmonic_functions: 0

View File

@@ -0,0 +1,30 @@
defaults:
- repro_multiseq_base.yaml
- repro_feat_extractor_normed.yaml
- _self_
generic_model_args:
  chunk_size_grid: 32000
  num_passes: 1
  n_train_target_views: -1
  loss_weights:
    loss_rgb_mse: 200.0
    loss_prev_stage_rgb_mse: 0.0
    loss_mask_bce: 1.0
    loss_prev_stage_mask_bce: 0.0
    loss_autodecoder_norm: 0.0
    depth_neg_penalty: 10000.0
  raysampler_class_type: NearFarRaySampler
  raysampler_NearFarRaySampler_args:
    n_rays_per_image_sampled_from_mask: 2048
    min_depth: 0.05
    max_depth: 0.05
    n_pts_per_ray_training: 1
    n_pts_per_ray_evaluation: 1
    stratified_point_sampling_training: false
    stratified_point_sampling_evaluation: false
  renderer_class_type: LSTMRenderer
  implicit_function_class_type: SRNImplicitFunction
  view_pooler_enabled: true
solver_args:
  breed: adam
  lr: 5.0e-05
@@ -0,0 +1,10 @@
defaults:
- repro_multiseq_srn_wce.yaml
- _self_
generic_model_args:
num_passes: 1
implicit_function_SRNImplicitFunction_args:
pixel_generator_args:
n_harmonic_functions: 0
raymarch_function_args:
n_harmonic_functions: 0
@@ -0,0 +1,39 @@
defaults:
- repro_base
- _self_
data_source_args:
data_loader_map_provider_SequenceDataLoaderMapProvider_args:
batch_size: 1
dataset_length_train: 1000
dataset_length_val: 1
num_workers: 8
dataset_map_provider_JsonIndexDatasetMapProvider_args:
assert_single_seq: true
n_frames_per_sequence: -1
test_restrict_sequence_id: 0
test_on_train: false
generic_model_args:
render_image_height: 800
render_image_width: 800
log_vars:
- loss_rgb_psnr_fg
- loss_rgb_psnr
- loss_eikonal
- loss_prev_stage_rgb_psnr
- loss_mask_bce
- loss_prev_stage_mask_bce
- loss_rgb_mse
- loss_prev_stage_rgb_mse
- loss_depth_abs
- loss_depth_abs_fg
- loss_kl
- loss_mask_neg_iou
- objective
- epoch
- sec/it
solver_args:
lr: 0.0005
max_epochs: 400
milestones:
- 200
- 300
@@ -0,0 +1,57 @@
defaults:
- repro_singleseq_base
- _self_
generic_model_args:
loss_weights:
loss_mask_bce: 100.0
loss_kl: 0.0
loss_rgb_mse: 1.0
loss_eikonal: 0.1
chunk_size_grid: 65536
num_passes: 1
view_pooler_enabled: false
implicit_function_IdrFeatureField_args:
n_harmonic_functions_xyz: 6
bias: 0.6
d_in: 3
d_out: 1
dims:
- 512
- 512
- 512
- 512
- 512
- 512
- 512
- 512
geometric_init: true
pooled_feature_dim: 0
skip_in:
- 6
weight_norm: true
renderer_SignedDistanceFunctionRenderer_args:
ray_tracer_args:
line_search_step: 0.5
line_step_iters: 3
n_secant_steps: 8
n_steps: 100
object_bounding_sphere: 8.0
sdf_threshold: 5.0e-05
ray_normal_coloring_network_args:
d_in: 9
d_out: 3
dims:
- 512
- 512
- 512
- 512
mode: idr
n_harmonic_functions_dir: 4
pooled_feature_dim: 0
weight_norm: true
raysampler_AdaptiveRaySampler_args:
n_rays_per_image_sampled_from_mask: 1024
n_pts_per_ray_training: 0
n_pts_per_ray_evaluation: 0
renderer_class_type: SignedDistanceFunctionRenderer
implicit_function_class_type: IdrFeatureField
@@ -0,0 +1,3 @@
defaults:
- repro_singleseq_base
- _self_
@@ -0,0 +1,9 @@
defaults:
- repro_singleseq_wce_base.yaml
- repro_feat_extractor_unnormed.yaml
- _self_
generic_model_args:
chunk_size_grid: 16000
view_pooler_enabled: true
raysampler_AdaptiveRaySampler_args:
n_rays_per_image_sampled_from_mask: 850
@@ -0,0 +1,17 @@
defaults:
- repro_singleseq_wce_base.yaml
- repro_feat_extractor_transformer.yaml
- _self_
generic_model_args:
chunk_size_grid: 16000
view_pooler_enabled: true
implicit_function_class_type: NeRFormerImplicitFunction
raysampler_AdaptiveRaySampler_args:
n_rays_per_image_sampled_from_mask: 800
n_pts_per_ray_training: 32
n_pts_per_ray_evaluation: 32
renderer_MultiPassEmissionAbsorptionRenderer_args:
n_pts_per_ray_fine_training: 16
n_pts_per_ray_fine_evaluation: 16
view_pooler_args:
feature_aggregator_class_type: IdentityFeatureAggregator
@@ -0,0 +1,28 @@
defaults:
- repro_singleseq_base.yaml
- _self_
generic_model_args:
num_passes: 1
chunk_size_grid: 32000
view_pooler_enabled: false
loss_weights:
loss_rgb_mse: 200.0
loss_prev_stage_rgb_mse: 0.0
loss_mask_bce: 1.0
loss_prev_stage_mask_bce: 0.0
loss_autodecoder_norm: 0.0
depth_neg_penalty: 10000.0
raysampler_class_type: NearFarRaySampler
raysampler_NearFarRaySampler_args:
n_rays_per_image_sampled_from_mask: 2048
min_depth: 0.05
max_depth: 0.05
n_pts_per_ray_training: 1
n_pts_per_ray_evaluation: 1
stratified_point_sampling_training: false
stratified_point_sampling_evaluation: false
renderer_class_type: LSTMRenderer
implicit_function_class_type: SRNImplicitFunction
solver_args:
breed: adam
lr: 5.0e-05
@@ -0,0 +1,10 @@
defaults:
- repro_singleseq_srn.yaml
- _self_
generic_model_args:
num_passes: 1
implicit_function_SRNImplicitFunction_args:
pixel_generator_args:
n_harmonic_functions: 0
raymarch_function_args:
n_harmonic_functions: 0
@@ -0,0 +1,29 @@
defaults:
- repro_singleseq_wce_base
- repro_feat_extractor_normed.yaml
- _self_
generic_model_args:
num_passes: 1
chunk_size_grid: 32000
view_pooler_enabled: true
loss_weights:
loss_rgb_mse: 200.0
loss_prev_stage_rgb_mse: 0.0
loss_mask_bce: 1.0
loss_prev_stage_mask_bce: 0.0
loss_autodecoder_norm: 0.0
depth_neg_penalty: 10000.0
raysampler_class_type: NearFarRaySampler
raysampler_NearFarRaySampler_args:
n_rays_per_image_sampled_from_mask: 2048
min_depth: 0.05
max_depth: 0.05
n_pts_per_ray_training: 1
n_pts_per_ray_evaluation: 1
stratified_point_sampling_training: false
stratified_point_sampling_evaluation: false
renderer_class_type: LSTMRenderer
implicit_function_class_type: SRNImplicitFunction
solver_args:
breed: adam
lr: 5.0e-05
@@ -0,0 +1,10 @@
defaults:
- repro_singleseq_srn_wce.yaml
- _self_
generic_model_args:
num_passes: 1
implicit_function_SRNImplicitFunction_args:
pixel_generator_args:
n_harmonic_functions: 0
raymarch_function_args:
n_harmonic_functions: 0
@@ -0,0 +1,22 @@
defaults:
- repro_singleseq_base
- _self_
data_source_args:
data_loader_map_provider_SequenceDataLoaderMapProvider_args:
batch_size: 10
dataset_length_train: 1000
dataset_length_val: 1
num_workers: 8
train_conditioning_type: SAME
val_conditioning_type: SAME
test_conditioning_type: SAME
images_per_seq_options:
- 2
- 3
- 4
- 5
- 6
- 7
- 8
- 9
- 10
@@ -0,0 +1,706 @@
#!/usr/bin/env python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
""""
This file is the entry point for launching experiments with Implicitron.
Main functions
---------------
- `run_training` is the wrapper for the train, val, test loops
and checkpointing
- `trainvalidate` is the inner loop which runs the model forward/backward
pass, visualizations and metric printing
Launch Training
---------------
Experiment config .yaml files are located in the
`projects/implicitron_trainer/configs` folder. To launch
an experiment, specify the name of the file. Specific config values can
also be overridden from the command line, for example:
```
./experiment.py --config-name base_config.yaml override.param.one=42 override.param.two=84
```
To run an experiment on a specific GPU, specify the `gpu_idx` key
in the config file / CLI. To run on a different device, specify the
device in `run_training`.
Outputs
--------
The outputs of the experiment are saved and logged in multiple ways:
- Checkpoints:
Model, optimizer and stats are stored in the directory
named by the `exp_dir` key from the config file / CLI parameters.
- Stats
Stats are logged and plotted to the file "train_stats.pdf" in the
same directory. The stats are also saved as part of the checkpoint file.
- Visualizations
Predictions are plotted to a visdom server running at the
port specified by the `visdom_server` and `visdom_port` keys in the
config file.
"""
import copy
import json
import logging
import os
import random
import time
import warnings
from typing import Any, Dict, Optional, Tuple
import hydra
import lpips
import numpy as np
import torch
import tqdm
from accelerate import Accelerator
from omegaconf import DictConfig, OmegaConf
from packaging import version
from pytorch3d.implicitron.dataset import utils as ds_utils
from pytorch3d.implicitron.dataset.data_loader_map_provider import DataLoaderMap
from pytorch3d.implicitron.dataset.data_source import ImplicitronDataSource, Task
from pytorch3d.implicitron.dataset.dataset_map_provider import DatasetMap
from pytorch3d.implicitron.evaluation import evaluate_new_view_synthesis as evaluate
from pytorch3d.implicitron.models.generic_model import EvaluationMode, GenericModel
from pytorch3d.implicitron.models.renderer.multipass_ea import (
MultiPassEmissionAbsorptionRenderer,
)
from pytorch3d.implicitron.models.renderer.ray_sampler import AdaptiveRaySampler
from pytorch3d.implicitron.tools import model_io, vis_utils
from pytorch3d.implicitron.tools.config import (
expand_args_fields,
remove_unused_components,
)
from pytorch3d.implicitron.tools.stats import Stats
from pytorch3d.renderer.cameras import CamerasBase
from .impl.experiment_config import ExperimentConfig
from .impl.optimization import init_optimizer
logger = logging.getLogger(__name__)
if version.parse(hydra.__version__) < version.Version("1.1"):
raise ValueError(
f"Hydra version {hydra.__version__} is too old."
" (Implicitron requires version 1.1 or later.)"
)
try:
# only makes sense in FAIR cluster
import pytorch3d.implicitron.fair_cluster.slurm # noqa: F401
except ModuleNotFoundError:
pass
no_accelerate = os.environ.get("PYTORCH3D_NO_ACCELERATE") is not None
def init_model(
*,
cfg: DictConfig,
accelerator: Optional[Accelerator] = None,
force_load: bool = False,
clear_stats: bool = False,
load_model_only: bool = False,
) -> Tuple[GenericModel, Stats, Optional[Dict[str, Any]]]:
"""
Returns an instance of `GenericModel`.
If `cfg.resume` is set or `force_load` is true,
attempts to load the last checkpoint from `cfg.exp_dir`. Failure to do so
will return the model with initial weights, unless `force_load` is passed,
in which case a FileNotFoundError is raised.
Args:
force_load: If true, force load model from checkpoint even if
cfg.resume is false.
clear_stats: If true, clear the stats object loaded from checkpoint
load_model_only: If true, load only the model weights from checkpoint
and do not load the state of the optimizer and stats.
Returns:
model: The model with optionally loaded weights from checkpoint
stats: The stats structure (optionally loaded from checkpoint)
optimizer_state: The optimizer state dict containing
`state` and `param_groups` keys (optionally loaded from checkpoint)
Raises:
FileNotFoundError if `force_load` is passed but checkpoint is not found.
"""
# Initialize the model
if cfg.architecture == "generic":
model = GenericModel(**cfg.generic_model_args)
else:
raise ValueError(f"No such arch {cfg.architecture}.")
# Determine the network outputs that should be logged
if hasattr(model, "log_vars"):
log_vars = copy.deepcopy(list(model.log_vars))
else:
log_vars = ["objective"]
visdom_env_charts = vis_utils.get_visdom_env(cfg) + "_charts"
# Init the stats struct
stats = Stats(
log_vars,
visdom_env=visdom_env_charts,
verbose=False,
visdom_server=cfg.visdom_server,
visdom_port=cfg.visdom_port,
)
# Retrieve the last checkpoint
if cfg.resume_epoch > 0:
model_path = model_io.get_checkpoint(cfg.exp_dir, cfg.resume_epoch)
else:
model_path = model_io.find_last_checkpoint(cfg.exp_dir)
optimizer_state = None
if model_path is not None:
logger.info("found previous model %s" % model_path)
if force_load or cfg.resume:
logger.info(" -> resuming")
map_location = None
if accelerator is not None and not accelerator.is_local_main_process:
map_location = {
"cuda:%d" % 0: "cuda:%d" % accelerator.local_process_index
}
if load_model_only:
model_state_dict = torch.load(
model_io.get_model_path(model_path), map_location=map_location
)
stats_load, optimizer_state = None, None
else:
model_state_dict, stats_load, optimizer_state = model_io.load_model(
model_path, map_location=map_location
)
# Determine if stats should be reset
if not clear_stats:
if stats_load is None:
logger.info("\n\n\n\nCORRUPT STATS -> clearing stats\n\n\n\n")
last_epoch = model_io.parse_epoch_from_model_path(model_path)
logger.info(f"Estimated resume epoch = {last_epoch}")
# Reset the stats struct
for _ in range(last_epoch + 1):
stats.new_epoch()
assert last_epoch == stats.epoch
else:
stats = stats_load
# Update stats properties in case it was reset on load
stats.visdom_env = visdom_env_charts
stats.visdom_server = cfg.visdom_server
stats.visdom_port = cfg.visdom_port
stats.plot_file = os.path.join(cfg.exp_dir, "train_stats.pdf")
stats.synchronize_logged_vars(log_vars)
else:
logger.info(" -> clearing stats")
try:
# TODO: fix on creation of the buffers
# after the hack above, this will not pass in most cases
# ... but this is fine for now
model.load_state_dict(model_state_dict, strict=True)
except RuntimeError as e:
logger.error(e)
logger.info("Cant load state dict in strict mode! -> trying non-strict")
model.load_state_dict(model_state_dict, strict=False)
model.log_vars = log_vars
else:
logger.info(" -> but not resuming -> starting from scratch")
elif force_load:
raise FileNotFoundError(f"Cannot find a checkpoint in {cfg.exp_dir}!")
return model, stats, optimizer_state
def trainvalidate(
model,
stats,
epoch,
loader,
optimizer,
validation: bool,
*,
accelerator: Optional[Accelerator],
device: torch.device,
bp_var: str = "objective",
metric_print_interval: int = 5,
visualize_interval: int = 100,
visdom_env_root: str = "trainvalidate",
clip_grad: float = 0.0,
**kwargs,
) -> None:
"""
This is the main loop for training and evaluation including:
model forward pass, loss computation, backward pass and visualization.
Args:
model: The model module optionally loaded from checkpoint
stats: The stats struct, also optionally loaded from checkpoint
epoch: The index of the current epoch
loader: The dataloader to use for the loop
optimizer: The optimizer module optionally loaded from checkpoint
validation: If true, run the loop with the model in eval mode
and skip the backward pass
bp_var: The name of the key in the model output `preds` dict which
should be used as the loss for the backward pass.
metric_print_interval: The batch interval at which the stats should be
logged.
visualize_interval: The batch interval at which the visualizations
should be plotted
visdom_env_root: The name of the visdom environment to use for plotting
clip_grad: Optionally clip the gradient norms.
If set to a value <=0.0, no clipping
device: The device on which to run the model.
Returns:
None
"""
if validation:
model.eval()
trainmode = "val"
else:
model.train()
trainmode = "train"
t_start = time.time()
# get the visdom env name
visdom_env_imgs = visdom_env_root + "_images_" + trainmode
viz = vis_utils.get_visdom_connection(
server=stats.visdom_server,
port=stats.visdom_port,
)
# Iterate through the batches
n_batches = len(loader)
for it, net_input in enumerate(loader):
last_iter = it == n_batches - 1
# move to gpu where possible (in place)
net_input = net_input.to(device)
# run the forward pass
if not validation:
optimizer.zero_grad()
preds = model(**{**net_input, "evaluation_mode": EvaluationMode.TRAINING})
else:
with torch.no_grad():
preds = model(
**{**net_input, "evaluation_mode": EvaluationMode.EVALUATION}
)
# make sure we don't overwrite anything
assert all(k not in preds for k in net_input.keys())
# merge everything into one big dict
preds.update(net_input)
# update the stats logger
stats.update(preds, time_start=t_start, stat_set=trainmode)
assert stats.it[trainmode] == it, "inconsistent stat iteration number!"
# print textual status update
if it % metric_print_interval == 0 or last_iter:
stats.print(stat_set=trainmode, max_it=n_batches)
# visualize results
if (
(accelerator is None or accelerator.is_local_main_process)
and visualize_interval > 0
and it % visualize_interval == 0
):
prefix = f"e{stats.epoch}_it{stats.it[trainmode]}"
model.visualize(
viz,
visdom_env_imgs,
preds,
prefix,
)
# optimizer step
if not validation:
loss = preds[bp_var]
assert torch.isfinite(loss).all(), "Non-finite loss!"
# backprop
if accelerator is None:
loss.backward()
else:
accelerator.backward(loss)
if clip_grad > 0.0:
# Optionally clip the gradient norms.
total_norm = torch.nn.utils.clip_grad_norm_(
model.parameters(), clip_grad
)
if total_norm > clip_grad:
logger.info(
f"Clipping gradient: {total_norm}"
+ f" with coef {clip_grad / float(total_norm)}."
)
optimizer.step()
def run_training(cfg: DictConfig) -> None:
"""
Entry point to run the training and validation loops
based on the specified config file.
"""
# Initialize the accelerator
if no_accelerate:
accelerator = None
device = torch.device("cuda:0")
else:
accelerator = Accelerator(device_placement=False)
logger.info(accelerator.state)
device = accelerator.device
logger.info(f"Running experiment on device: {device}")
# set the debug mode
if cfg.detect_anomaly:
logger.info("Anomaly detection!")
torch.autograd.set_detect_anomaly(cfg.detect_anomaly)
# create the output folder
os.makedirs(cfg.exp_dir, exist_ok=True)
_seed_all_random_engines(cfg.seed)
remove_unused_components(cfg)
# dump the exp config to the exp dir
try:
cfg_filename = os.path.join(cfg.exp_dir, "expconfig.yaml")
OmegaConf.save(config=cfg, f=cfg_filename)
except PermissionError:
warnings.warn("Cant dump config due to insufficient permissions!")
# setup datasets
datasource = ImplicitronDataSource(**cfg.data_source_args)
datasets, dataloaders = datasource.get_datasets_and_dataloaders()
task = datasource.get_task()
# init the model
model, stats, optimizer_state = init_model(cfg=cfg, accelerator=accelerator)
start_epoch = stats.epoch + 1
# move model to gpu
model.to(device)
# only run evaluation on the test dataloader
if cfg.eval_only:
_eval_and_dump(
cfg,
task,
datasource.all_train_cameras,
datasets,
dataloaders,
model,
stats,
device=device,
)
return
# init the optimizer
optimizer, scheduler = init_optimizer(
model,
optimizer_state=optimizer_state,
last_epoch=start_epoch,
**cfg.solver_args,
)
# check the scheduler and stats have been initialized correctly
assert scheduler.last_epoch == stats.epoch + 1
assert scheduler.last_epoch == start_epoch
# Wrap all modules in the distributed library
# Note: we don't pass the scheduler to prepare as it
# doesn't need to be stepped at each optimizer step
train_loader = dataloaders.train
val_loader = dataloaders.val
if accelerator is not None:
(
model,
optimizer,
train_loader,
val_loader,
) = accelerator.prepare(model, optimizer, train_loader, val_loader)
past_scheduler_lrs = []
# loop through epochs
for epoch in range(start_epoch, cfg.solver_args.max_epochs):
# automatic new_epoch and plotting of stats at every epoch start
with stats:
# Make sure to re-seed random generators to ensure reproducibility
# even after restart.
_seed_all_random_engines(cfg.seed + epoch)
cur_lr = float(scheduler.get_last_lr()[-1])
logger.info(f"scheduler lr = {cur_lr:1.2e}")
past_scheduler_lrs.append(cur_lr)
# train loop
trainvalidate(
model,
stats,
epoch,
train_loader,
optimizer,
False,
visdom_env_root=vis_utils.get_visdom_env(cfg),
device=device,
accelerator=accelerator,
**cfg,
)
# val loop (optional)
if val_loader is not None and epoch % cfg.validation_interval == 0:
trainvalidate(
model,
stats,
epoch,
val_loader,
optimizer,
True,
visdom_env_root=vis_utils.get_visdom_env(cfg),
device=device,
accelerator=accelerator,
**cfg,
)
# eval loop (optional)
if (
dataloaders.test is not None
and cfg.test_interval > 0
and epoch % cfg.test_interval == 0
):
_run_eval(
model,
datasource.all_train_cameras,
dataloaders.test,
task,
camera_difficulty_bin_breaks=cfg.camera_difficulty_bin_breaks,
device=device,
)
assert stats.epoch == epoch, "inconsistent stats!"
# delete previous models if required
# save model only on the main process
if cfg.store_checkpoints and (
accelerator is None or accelerator.is_local_main_process
):
if cfg.store_checkpoints_purge > 0:
for prev_epoch in range(epoch - cfg.store_checkpoints_purge):
model_io.purge_epoch(cfg.exp_dir, prev_epoch)
outfile = model_io.get_checkpoint(cfg.exp_dir, epoch)
unwrapped_model = (
model if accelerator is None else accelerator.unwrap_model(model)
)
model_io.safe_save_model(
unwrapped_model, stats, outfile, optimizer=optimizer
)
scheduler.step()
new_lr = float(scheduler.get_last_lr()[-1])
if new_lr != cur_lr:
logger.info(f"LR change! {cur_lr} -> {new_lr}")
if cfg.test_when_finished:
_eval_and_dump(
cfg,
task,
datasource.all_train_cameras,
datasets,
dataloaders,
model,
stats,
device=device,
)
def _eval_and_dump(
cfg,
task: Task,
all_train_cameras: Optional[CamerasBase],
datasets: DatasetMap,
dataloaders: DataLoaderMap,
model,
stats,
device,
) -> None:
"""
Run the evaluation loop with the test data loader and
save the predictions to the `exp_dir`.
"""
dataloader = dataloaders.test
if dataloader is None:
raise ValueError('DataLoaderMap has to contain the "test" entry for eval!')
results = _run_eval(
model,
all_train_cameras,
dataloader,
task,
camera_difficulty_bin_breaks=cfg.camera_difficulty_bin_breaks,
device=device,
)
# add the evaluation epoch to the results
for r in results:
r["eval_epoch"] = int(stats.epoch)
logger.info("Evaluation results")
evaluate.pretty_print_nvs_metrics(results)
with open(os.path.join(cfg.exp_dir, "results_test.json"), "w") as f:
json.dump(results, f)
def _get_eval_frame_data(frame_data):
"""
Masks the unknown image data to make sure we cannot use it at model evaluation time.
"""
frame_data_for_eval = copy.deepcopy(frame_data)
is_known = ds_utils.is_known_frame(frame_data.frame_type).type_as(
frame_data.image_rgb
)[:, None, None, None]
for k in ("image_rgb", "depth_map", "fg_probability", "mask_crop"):
value_masked = getattr(frame_data_for_eval, k).clone() * is_known
setattr(frame_data_for_eval, k, value_masked)
return frame_data_for_eval
def _run_eval(
model,
all_train_cameras,
loader,
task: Task,
camera_difficulty_bin_breaks: Tuple[float, float],
device,
):
"""
Run the evaluation loop on the test dataloader
"""
lpips_model = lpips.LPIPS(net="vgg")
lpips_model = lpips_model.to(device)
model.eval()
per_batch_eval_results = []
logger.info("Evaluating model ...")
for frame_data in tqdm.tqdm(loader):
frame_data = frame_data.to(device)
# mask out the unknown images so that the model does not see them
frame_data_for_eval = _get_eval_frame_data(frame_data)
with torch.no_grad():
preds = model(
**{**frame_data_for_eval, "evaluation_mode": EvaluationMode.EVALUATION}
)
# TODO: Cannot use accelerate gather for two reasons:
# (1) TypeError: Can't apply _gpu_gather_one on object of type
# <class 'pytorch3d.implicitron.models.base_model.ImplicitronRender'>,
# only of nested list/tuple/dicts of objects that satisfy is_torch_tensor.
# (2) Same error above but for frame_data which contains Cameras.
implicitron_render = copy.deepcopy(preds["implicitron_render"])
per_batch_eval_results.append(
evaluate.eval_batch(
frame_data,
implicitron_render,
bg_color="black",
lpips_model=lpips_model,
source_cameras=all_train_cameras,
)
)
_, category_result = evaluate.summarize_nvs_eval_results(
per_batch_eval_results, task, camera_difficulty_bin_breaks
)
return category_result["results"]
def _seed_all_random_engines(seed: int) -> None:
np.random.seed(seed)
torch.manual_seed(seed)
random.seed(seed)
def _setup_envvars_for_cluster() -> bool:
"""
Prepares to run on cluster if relevant.
Returns whether FAIR cluster in use.
"""
# TODO: How much of this is needed in general?
try:
import submitit
except ImportError:
return False
try:
# Only needed when launching on cluster with slurm and submitit
job_env = submitit.JobEnvironment()
except RuntimeError:
return False
os.environ["LOCAL_RANK"] = str(job_env.local_rank)
os.environ["RANK"] = str(job_env.global_rank)
os.environ["WORLD_SIZE"] = str(job_env.num_tasks)
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "42918"
logger.info(
"Num tasks %s, global_rank %s"
% (str(job_env.num_tasks), str(job_env.global_rank))
)
return True
expand_args_fields(ExperimentConfig)
cs = hydra.core.config_store.ConfigStore.instance()
cs.store(name="default_config", node=ExperimentConfig)
@hydra.main(config_path="./configs/", config_name="default_config")
def experiment(cfg: DictConfig) -> None:
# CUDA_VISIBLE_DEVICES must have been set.
if "CUDA_DEVICE_ORDER" not in os.environ:
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
if not _setup_envvars_for_cluster():
logger.info("Running locally")
# TODO: The following may be needed for hydra/submitit to work
expand_args_fields(GenericModel)
expand_args_fields(AdaptiveRaySampler)
expand_args_fields(MultiPassEmissionAbsorptionRenderer)
expand_args_fields(ImplicitronDataSource)
run_training(cfg)
if __name__ == "__main__":
experiment()
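
The `clip_grad` branch of `trainvalidate` rescales gradients whenever their global L2 norm exceeds the threshold, and logs the pre-clip norm and scaling coefficient. A minimal pure-Python sketch of that rescaling (the `clip_grad_norm` helper here is illustrative, standing in for `torch.nn.utils.clip_grad_norm_` applied to flat gradients):

```python
import math

def clip_grad_norm(grads, max_norm):
    # Rescale gradients in place when their global L2 norm exceeds
    # max_norm; return the pre-clip norm, as clip_grad_norm_ does.
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        coef = max_norm / (total_norm + 1e-6)
        grads[:] = [g * coef for g in grads]
    return total_norm

grads = [3.0, 4.0]                     # global norm 5.0
pre_clip = clip_grad_norm(grads, 1.0)  # grads rescaled to norm ~1.0
```

With `clip_grad <= 0.0` (the default in `ExperimentConfig`) this branch is skipped entirely, matching the code above.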
@@ -0,0 +1,5 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
@@ -0,0 +1,49 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
from dataclasses import field
from typing import Any, Dict, Tuple
from omegaconf import DictConfig
from pytorch3d.implicitron.dataset.data_source import ImplicitronDataSource
from pytorch3d.implicitron.models.generic_model import GenericModel
from pytorch3d.implicitron.tools.config import Configurable, get_default_args_field
from .optimization import init_optimizer
class ExperimentConfig(Configurable):
generic_model_args: DictConfig = get_default_args_field(GenericModel)
solver_args: DictConfig = get_default_args_field(init_optimizer)
data_source_args: DictConfig = get_default_args_field(ImplicitronDataSource)
architecture: str = "generic"
detect_anomaly: bool = False
eval_only: bool = False
exp_dir: str = "./data/default_experiment/"
exp_idx: int = 0
gpu_idx: int = 0
metric_print_interval: int = 5
resume: bool = True
resume_epoch: int = -1
seed: int = 0
store_checkpoints: bool = True
store_checkpoints_purge: int = 1
test_interval: int = -1
test_when_finished: bool = False
validation_interval: int = 1
visdom_env: str = ""
visdom_port: int = 8097
visdom_server: str = "http://127.0.0.1"
visualize_interval: int = 1000
clip_grad: float = 0.0
camera_difficulty_bin_breaks: Tuple[float, ...] = 0.97, 0.98
hydra: Dict[str, Any] = field(
default_factory=lambda: {
"run": {"dir": "."}, # Make hydra not change the working dir.
"output_subdir": None, # disable storing the .hydra logs
}
)
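
The `hydra` field above uses `dataclasses.field(default_factory=...)` so each config instance gets its own fresh dict instead of sharing one mutable default. A standalone sketch with a plain dataclass (the `HydraDefaults` class is illustrative, not part of Implicitron):

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class HydraDefaults:
    # Mutable defaults need default_factory: a bare dict default would be
    # shared across instances (and dataclasses rejects it at class
    # creation time).
    hydra: Dict[str, Any] = field(
        default_factory=lambda: {"run": {"dir": "."}, "output_subdir": None}
    )

a, b = HydraDefaults(), HydraDefaults()
a.hydra["run"]["dir"] = "/tmp/exp"
# b is unaffected: each instance received a fresh dict from the factory.
```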
@@ -0,0 +1,109 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
import logging
from typing import Any, Dict, Optional, Tuple
import torch
from pytorch3d.implicitron.models.generic_model import GenericModel
from pytorch3d.implicitron.tools.config import enable_get_default_args
logger = logging.getLogger(__name__)
def init_optimizer(
model: GenericModel,
optimizer_state: Optional[Dict[str, Any]],
last_epoch: int,
breed: str = "adam",
weight_decay: float = 0.0,
lr_policy: str = "multistep",
lr: float = 0.0005,
gamma: float = 0.1,
momentum: float = 0.9,
betas: Tuple[float, ...] = (0.9, 0.999),
milestones: Tuple[int, ...] = (),
max_epochs: int = 1000,
):
"""
Initialize the optimizer (optionally from checkpoint state)
and the learning rate scheduler.
Args:
model: The model with optionally loaded weights
optimizer_state: The state dict for the optimizer. If None
it has not been loaded from checkpoint
last_epoch: If the model was loaded from checkpoint this will be the
number of the last epoch that was saved
breed: The type of optimizer to use e.g. adam
weight_decay: The optimizer weight_decay (L2 penalty on model weights)
lr_policy: The policy to use for learning rate. Currently, only "multistep"
is supported.
lr: The value for the initial learning rate
gamma: Multiplicative factor of learning rate decay
momentum: Momentum factor for SGD optimizer
betas: Coefficients used for computing running averages of gradient and its square
in the Adam optimizer
milestones: List of increasing epoch indices at which the learning rate is
modified
max_epochs: The maximum number of epochs to run the optimizer for
Returns:
optimizer: Optimizer module, optionally loaded from checkpoint
scheduler: Learning rate scheduler module
Raises:
ValueError if `breed` or `lr_policy` are not supported.
"""
# Get the parameters to optimize
if hasattr(model, "_get_param_groups"): # use the model function
# pyre-ignore[29]
p_groups = model._get_param_groups(lr, wd=weight_decay)
else:
allprm = [prm for prm in model.parameters() if prm.requires_grad]
p_groups = [{"params": allprm, "lr": lr}]
# Initialize the optimizer
if breed == "sgd":
optimizer = torch.optim.SGD(
p_groups, lr=lr, momentum=momentum, weight_decay=weight_decay
)
elif breed == "adagrad":
optimizer = torch.optim.Adagrad(p_groups, lr=lr, weight_decay=weight_decay)
elif breed == "adam":
optimizer = torch.optim.Adam(
p_groups, lr=lr, betas=betas, weight_decay=weight_decay
)
else:
raise ValueError("no such solver type %s" % breed)
logger.info(" -> solver type = %s" % breed)
# Load state from checkpoint
if optimizer_state is not None:
logger.info(" -> setting loaded optimizer state")
optimizer.load_state_dict(optimizer_state)
# Initialize the learning rate scheduler
if lr_policy == "multistep":
scheduler = torch.optim.lr_scheduler.MultiStepLR(
optimizer,
milestones=milestones,
gamma=gamma,
)
else:
raise ValueError("no such lr policy %s" % lr_policy)
# When loading from checkpoint, this will make sure that the
# lr is correctly set even after returning
for _ in range(last_epoch):
scheduler.step()
optimizer.zero_grad()
return optimizer, scheduler
enable_get_default_args(init_optimizer)
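
With the defaults above (`lr_policy="multistep"`, `gamma=0.1`) and the `milestones: [200, 300]` used in the repro configs, the learning rate drops by a factor of 10 at each milestone. A pure-Python sketch of the resulting schedule (`multistep_lr` is an illustrative helper mirroring `torch.optim.lr_scheduler.MultiStepLR`):

```python
def multistep_lr(base_lr, epoch, milestones=(200, 300), gamma=0.1):
    # Learning rate after `epoch` epochs: decayed by `gamma` once for
    # every milestone that has been reached.
    return base_lr * gamma ** sum(epoch >= m for m in milestones)

multistep_lr(5e-4, 100)  # still 5e-4
multistep_lr(5e-4, 250)  # decayed once, ~5e-5
multistep_lr(5e-4, 350)  # decayed twice, ~5e-6
```

This is also why `init_optimizer` steps the scheduler `last_epoch` times after a checkpoint load: replaying the steps restores the lr that the milestone schedule prescribes for the resumed epoch.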
@@ -0,0 +1,5 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
@@ -0,0 +1,425 @@
generic_model_args:
  mask_images: true
  mask_depths: true
  render_image_width: 400
  render_image_height: 400
  mask_threshold: 0.5
  output_rasterized_mc: false
  bg_color:
  - 0.0
  - 0.0
  - 0.0
  num_passes: 1
  chunk_size_grid: 4096
  render_features_dimensions: 3
  tqdm_trigger_threshold: 16
  n_train_target_views: 1
  sampling_mode_training: mask_sample
  sampling_mode_evaluation: full_grid
  global_encoder_class_type: null
  raysampler_class_type: AdaptiveRaySampler
  renderer_class_type: MultiPassEmissionAbsorptionRenderer
  image_feature_extractor_class_type: null
  view_pooler_enabled: false
  implicit_function_class_type: NeuralRadianceFieldImplicitFunction
  view_metrics_class_type: ViewMetrics
  regularization_metrics_class_type: RegularizationMetrics
  loss_weights:
    loss_rgb_mse: 1.0
    loss_prev_stage_rgb_mse: 1.0
    loss_mask_bce: 0.0
    loss_prev_stage_mask_bce: 0.0
  log_vars:
  - loss_rgb_psnr_fg
  - loss_rgb_psnr
  - loss_rgb_mse
  - loss_rgb_huber
  - loss_depth_abs
  - loss_depth_abs_fg
  - loss_mask_neg_iou
  - loss_mask_bce
  - loss_mask_beta_prior
  - loss_eikonal
  - loss_density_tv
  - loss_depth_neg_penalty
  - loss_autodecoder_norm
  - loss_prev_stage_rgb_mse
  - loss_prev_stage_rgb_psnr_fg
  - loss_prev_stage_rgb_psnr
  - loss_prev_stage_mask_bce
  - objective
  - epoch
  - sec/it
  global_encoder_HarmonicTimeEncoder_args:
    n_harmonic_functions: 10
    append_input: true
    time_divisor: 1.0
  global_encoder_SequenceAutodecoder_args:
    autodecoder_args:
      encoding_dim: 0
      n_instances: 0
      init_scale: 1.0
      ignore_input: false
  raysampler_AdaptiveRaySampler_args:
    image_width: 400
    image_height: 400
    sampling_mode_training: mask_sample
    sampling_mode_evaluation: full_grid
    n_pts_per_ray_training: 64
    n_pts_per_ray_evaluation: 64
    n_rays_per_image_sampled_from_mask: 1024
    stratified_point_sampling_training: true
    stratified_point_sampling_evaluation: false
    scene_extent: 8.0
    scene_center:
    - 0.0
    - 0.0
    - 0.0
  raysampler_NearFarRaySampler_args:
    image_width: 400
    image_height: 400
    sampling_mode_training: mask_sample
    sampling_mode_evaluation: full_grid
    n_pts_per_ray_training: 64
    n_pts_per_ray_evaluation: 64
    n_rays_per_image_sampled_from_mask: 1024
    stratified_point_sampling_training: true
    stratified_point_sampling_evaluation: false
    min_depth: 0.1
    max_depth: 8.0
  renderer_LSTMRenderer_args:
    num_raymarch_steps: 10
    init_depth: 17.0
    init_depth_noise_std: 0.0005
    hidden_size: 16
    n_feature_channels: 256
    bg_color: null
    verbose: false
  renderer_MultiPassEmissionAbsorptionRenderer_args:
    raymarcher_class_type: EmissionAbsorptionRaymarcher
    n_pts_per_ray_fine_training: 64
    n_pts_per_ray_fine_evaluation: 64
    stratified_sampling_coarse_training: true
    stratified_sampling_coarse_evaluation: false
    append_coarse_samples_to_fine: true
    density_noise_std_train: 0.0
    return_weights: false
    raymarcher_CumsumRaymarcher_args:
      surface_thickness: 1
      bg_color:
      - 0.0
      background_opacity: 0.0
      density_relu: true
      blend_output: false
    raymarcher_EmissionAbsorptionRaymarcher_args:
      surface_thickness: 1
      bg_color:
      - 0.0
      background_opacity: 10000000000.0
      density_relu: true
      blend_output: false
  renderer_SignedDistanceFunctionRenderer_args:
    render_features_dimensions: 3
    ray_tracer_args:
      object_bounding_sphere: 1.0
      sdf_threshold: 5.0e-05
      line_search_step: 0.5
      line_step_iters: 1
      sphere_tracing_iters: 10
      n_steps: 100
      n_secant_steps: 8
    ray_normal_coloring_network_args:
      feature_vector_size: 3
      mode: idr
      d_in: 9
      d_out: 3
      dims:
      - 512
      - 512
      - 512
      - 512
      weight_norm: true
      n_harmonic_functions_dir: 0
      pooled_feature_dim: 0
    bg_color:
    - 0.0
    soft_mask_alpha: 50.0
  image_feature_extractor_ResNetFeatureExtractor_args:
    name: resnet34
    pretrained: true
    stages:
    - 1
    - 2
    - 3
    - 4
    normalize_image: true
    image_rescale: 0.16
    first_max_pool: true
    proj_dim: 32
    l2_norm: true
    add_masks: true
    add_images: true
    global_average_pool: false
    feature_rescale: 1.0
  view_pooler_args:
    feature_aggregator_class_type: AngleWeightedReductionFeatureAggregator
    view_sampler_args:
      masked_sampling: false
      sampling_mode: bilinear
    feature_aggregator_AngleWeightedIdentityFeatureAggregator_args:
      exclude_target_view: true
      exclude_target_view_mask_features: true
      concatenate_output: true
      weight_by_ray_angle_gamma: 1.0
      min_ray_angle_weight: 0.1
    feature_aggregator_AngleWeightedReductionFeatureAggregator_args:
      exclude_target_view: true
      exclude_target_view_mask_features: true
      concatenate_output: true
      reduction_functions:
      - AVG
      - STD
      weight_by_ray_angle_gamma: 1.0
      min_ray_angle_weight: 0.1
    feature_aggregator_IdentityFeatureAggregator_args:
      exclude_target_view: true
      exclude_target_view_mask_features: true
      concatenate_output: true
    feature_aggregator_ReductionFeatureAggregator_args:
      exclude_target_view: true
      exclude_target_view_mask_features: true
      concatenate_output: true
      reduction_functions:
      - AVG
      - STD
  implicit_function_IdrFeatureField_args:
    feature_vector_size: 3
    d_in: 3
    d_out: 1
    dims:
    - 512
    - 512
    - 512
    - 512
    - 512
    - 512
    - 512
    - 512
    geometric_init: true
    bias: 1.0
    skip_in: []
    weight_norm: true
    n_harmonic_functions_xyz: 0
    pooled_feature_dim: 0
    encoding_dim: 0
  implicit_function_NeRFormerImplicitFunction_args:
    n_harmonic_functions_xyz: 10
    n_harmonic_functions_dir: 4
    n_hidden_neurons_dir: 128
    latent_dim: 0
    input_xyz: true
    xyz_ray_dir_in_camera_coords: false
    color_dim: 3
    transformer_dim_down_factor: 2.0
    n_hidden_neurons_xyz: 80
    n_layers_xyz: 2
    append_xyz:
    - 1
  implicit_function_NeuralRadianceFieldImplicitFunction_args:
    n_harmonic_functions_xyz: 10
    n_harmonic_functions_dir: 4
    n_hidden_neurons_dir: 128
    latent_dim: 0
    input_xyz: true
    xyz_ray_dir_in_camera_coords: false
    color_dim: 3
    transformer_dim_down_factor: 1.0
    n_hidden_neurons_xyz: 256
    n_layers_xyz: 8
    append_xyz:
    - 5
  implicit_function_SRNHyperNetImplicitFunction_args:
    hypernet_args:
      n_harmonic_functions: 3
      n_hidden_units: 256
      n_layers: 2
      n_hidden_units_hypernet: 256
      n_layers_hypernet: 1
      in_features: 3
      out_features: 256
      latent_dim_hypernet: 0
      latent_dim: 0
      xyz_in_camera_coords: false
    pixel_generator_args:
      n_harmonic_functions: 4
      n_hidden_units: 256
      n_hidden_units_color: 128
      n_layers: 2
      in_features: 256
      out_features: 3
      ray_dir_in_camera_coords: false
  implicit_function_SRNImplicitFunction_args:
    raymarch_function_args:
      n_harmonic_functions: 3
      n_hidden_units: 256
      n_layers: 2
      in_features: 3
      out_features: 256
      latent_dim: 0
      xyz_in_camera_coords: false
      raymarch_function: null
    pixel_generator_args:
      n_harmonic_functions: 4
      n_hidden_units: 256
      n_hidden_units_color: 128
      n_layers: 2
      in_features: 256
      out_features: 3
      ray_dir_in_camera_coords: false
  view_metrics_ViewMetrics_args: {}
  regularization_metrics_RegularizationMetrics_args: {}
solver_args:
  breed: adam
  weight_decay: 0.0
  lr_policy: multistep
  lr: 0.0005
  gamma: 0.1
  momentum: 0.9
  betas:
  - 0.9
  - 0.999
  milestones: []
  max_epochs: 1000
data_source_args:
  dataset_map_provider_class_type: ???
  data_loader_map_provider_class_type: SequenceDataLoaderMapProvider
  dataset_map_provider_BlenderDatasetMapProvider_args:
    base_dir: ???
    object_name: ???
    path_manager_factory_class_type: PathManagerFactory
    n_known_frames_for_test: null
    path_manager_factory_PathManagerFactory_args:
      silence_logs: true
  dataset_map_provider_JsonIndexDatasetMapProvider_args:
    category: ???
    task_str: singlesequence
    dataset_root: ''
    n_frames_per_sequence: -1
    test_on_train: false
    restrict_sequence_name: []
    test_restrict_sequence_id: -1
    assert_single_seq: false
    only_test_set: false
    dataset_class_type: JsonIndexDataset
    path_manager_factory_class_type: PathManagerFactory
    dataset_JsonIndexDataset_args:
      limit_to: 0
      limit_sequences_to: 0
      exclude_sequence: []
      limit_category_to: []
      load_images: true
      load_depths: true
      load_depth_masks: true
      load_masks: true
      load_point_clouds: false
      max_points: 0
      mask_images: false
      mask_depths: false
      image_height: 800
      image_width: 800
      box_crop: true
      box_crop_mask_thr: 0.4
      box_crop_context: 0.3
      remove_empty_masks: true
      seed: 0
      sort_frames: false
    path_manager_factory_PathManagerFactory_args:
      silence_logs: true
  dataset_map_provider_JsonIndexDatasetMapProviderV2_args:
    category: ???
    subset_name: ???
    dataset_root: ''
    test_on_train: false
    only_test_set: false
    load_eval_batches: true
    dataset_class_type: JsonIndexDataset
    path_manager_factory_class_type: PathManagerFactory
    dataset_JsonIndexDataset_args:
      path_manager: null
      frame_annotations_file: ''
      sequence_annotations_file: ''
      subset_lists_file: ''
      subsets: null
      limit_to: 0
      limit_sequences_to: 0
      pick_sequence: []
      exclude_sequence: []
      limit_category_to: []
      dataset_root: ''
      load_images: true
      load_depths: true
      load_depth_masks: true
      load_masks: true
      load_point_clouds: false
      max_points: 0
      mask_images: false
      mask_depths: false
      image_height: 800
      image_width: 800
      box_crop: true
      box_crop_mask_thr: 0.4
      box_crop_context: 0.3
      remove_empty_masks: true
      n_frames_per_sequence: -1
      seed: 0
      sort_frames: false
      eval_batches: null
    path_manager_factory_PathManagerFactory_args:
      silence_logs: true
  dataset_map_provider_LlffDatasetMapProvider_args:
    base_dir: ???
    object_name: ???
    path_manager_factory_class_type: PathManagerFactory
    n_known_frames_for_test: null
    path_manager_factory_PathManagerFactory_args:
      silence_logs: true
  data_loader_map_provider_SequenceDataLoaderMapProvider_args:
    batch_size: 1
    num_workers: 0
    dataset_length_train: 0
    dataset_length_val: 0
    dataset_length_test: 0
    train_conditioning_type: SAME
    val_conditioning_type: SAME
    test_conditioning_type: KNOWN
    images_per_seq_options: []
    sample_consecutive_frames: false
    consecutive_frames_max_gap: 0
    consecutive_frames_max_gap_seconds: 0.1
architecture: generic
detect_anomaly: false
eval_only: false
exp_dir: ./data/default_experiment/
exp_idx: 0
gpu_idx: 0
metric_print_interval: 5
resume: true
resume_epoch: -1
seed: 0
store_checkpoints: true
store_checkpoints_purge: 1
test_interval: -1
test_when_finished: false
validation_interval: 1
visdom_env: ''
visdom_port: 8097
visdom_server: http://127.0.0.1
visualize_interval: 1000
clip_grad: 0.0
camera_difficulty_bin_breaks:
- 0.97
- 0.98
hydra:
  run:
    dir: .
  output_subdir: null
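Nested keys like the ones above are usually addressed with Hydra/OmegaConf dotted paths (for example, `solver_args.lr=0.0001` on the command line). As an illustration only, a hypothetical `apply_override` helper shows how such a dotted path resolves into the nesting, sketched on a plain dict rather than a real OmegaConf object:

```python
from typing import Any, Dict


def apply_override(cfg: Dict[str, Any], dotted_key: str, value: Any) -> None:
    """Set cfg['a']['b']['c'] = value given the dotted path 'a.b.c'."""
    *parents, leaf = dotted_key.split(".")
    node = cfg
    for key in parents:
        node = node[key]  # descend one nesting level per path segment
    node[leaf] = value


# A tiny slice of the nested config above, as a plain dict.
cfg = {"generic_model_args": {"loss_weights": {"loss_rgb_mse": 1.0}}}
apply_override(cfg, "generic_model_args.loss_weights.loss_rgb_mse", 0.5)
```

In a real run, Hydra performs this resolution (with type checking against the underlying dataclasses) before the config reaches the trainer.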


@@ -0,0 +1,91 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

import os
import unittest
from pathlib import Path

from hydra import compose, initialize_config_dir
from omegaconf import OmegaConf

from .. import experiment


def interactive_testing_requested() -> bool:
    """
    Certain tests are only useful when run interactively, and so are not regularly run.
    These are activated by this function returning True, which the user requests by
    setting the environment variable `PYTORCH3D_INTERACTIVE_TESTING` to 1.
    """
    return os.environ.get("PYTORCH3D_INTERACTIVE_TESTING", "") == "1"


internal = os.environ.get("FB_TEST", False)

DATA_DIR = Path(__file__).resolve().parent
IMPLICITRON_CONFIGS_DIR = Path(__file__).resolve().parent.parent / "configs"
DEBUG: bool = False

# TODO:
# - add enough files to skateboard_first_5 that this works on RE.
# - share common code with PyTorch3D tests?
# - deal with the temporary output files this test creates


class TestExperiment(unittest.TestCase):
    def setUp(self):
        self.maxDiff = None

    def test_from_defaults(self):
        # Test making minimal changes to the dataclass defaults.
        if not interactive_testing_requested() or not internal:
            return
        cfg = OmegaConf.structured(experiment.ExperimentConfig)
        cfg.data_source_args.dataset_map_provider_class_type = (
            "JsonIndexDatasetMapProvider"
        )
        dataset_args = (
            cfg.data_source_args.dataset_map_provider_JsonIndexDatasetMapProvider_args
        )
        dataloader_args = (
            cfg.data_source_args.data_loader_map_provider_SequenceDataLoaderMapProvider_args
        )
        dataset_args.category = "skateboard"
        dataset_args.test_restrict_sequence_id = 0
        dataset_args.dataset_root = "manifold://co3d/tree/extracted"
        dataset_args.dataset_JsonIndexDataset_args.limit_sequences_to = 5
        dataset_args.dataset_JsonIndexDataset_args.image_height = 80
        dataset_args.dataset_JsonIndexDataset_args.image_width = 80
        dataloader_args.dataset_length_train = 1
        dataloader_args.dataset_length_val = 1
        cfg.solver_args.max_epochs = 2
        experiment.run_training(cfg)

    def test_yaml_contents(self):
        cfg = OmegaConf.structured(experiment.ExperimentConfig)
        yaml = OmegaConf.to_yaml(cfg, sort_keys=False)
        if DEBUG:
            (DATA_DIR / "experiment.yaml").write_text(yaml)
        self.assertEqual(yaml, (DATA_DIR / "experiment.yaml").read_text())

    def test_load_configs(self):
        config_files = []
        for pattern in ("repro_singleseq*.yaml", "repro_multiseq*.yaml"):
            config_files.extend(
                [
                    f
                    for f in IMPLICITRON_CONFIGS_DIR.glob(pattern)
                    if not f.name.endswith("_base.yaml")
                ]
            )
        for file in config_files:
            with self.subTest(file.name):
                with initialize_config_dir(config_dir=str(IMPLICITRON_CONFIGS_DIR)):
                    compose(file.name)
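`test_yaml_contents` is a standard golden-file test: serialize the default config, compare it to the checked-in `experiment.yaml`, and flip `DEBUG` to regenerate the file after an intentional schema change. A minimal stdlib sketch of the same pattern, where a toy dataclass and JSON stand in for the real `ExperimentConfig` and OmegaConf YAML:

```python
import dataclasses
import json
import tempfile
from pathlib import Path


@dataclasses.dataclass
class ToyConfig:
    lr: float = 0.0005
    max_epochs: int = 1000


def dump(cfg: ToyConfig) -> str:
    # Deterministic serialization of the defaults, analogous to OmegaConf.to_yaml.
    return json.dumps(dataclasses.asdict(cfg), indent=2) + "\n"


DEBUG = False
golden = Path(tempfile.mkdtemp()) / "toy_config.json"
golden.write_text(dump(ToyConfig()))  # stand-in for the previously checked-in file

if DEBUG:
    # Regenerate the golden file instead of comparing (the DEBUG path above).
    golden.write_text(dump(ToyConfig()))
assert dump(ToyConfig()) == golden.read_text()
```

The test fails whenever a default changes without the golden file being regenerated, which keeps the checked-in YAML an accurate record of the schema.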

Some files were not shown because too many files have changed in this diff.