Summary:
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1394
The bash logic for building conda packages had become fiddly to edit. We need to switch from cudatoolkit to pytorch-cuda when building for PyTorch >= 1.12, which was going to be a pain in bash, so here I rewrite the code in Python and make that switch.
Reviewed By: shapovalov
Differential Revision: D42036406
fbshipit-source-id: 8bb80c2f7545477182b23fc97c8514dcafcee176
Summary: Make GLB files report their own length correctly; they were off by 28 bytes.
Reviewed By: davidsonic
Differential Revision: D41838340
fbshipit-source-id: 9cd66e8337c142298d5ae1d7c27e51fd812d5c7b
Summary: Python 3.7 not needed any more
Reviewed By: shapovalov
Differential Revision: D41841033
fbshipit-source-id: c0cfd048c70e6b9e47224ab8cddcd6b5f4fc5597
Summary: All Mac builds now use PyTorch 1.13.
Reviewed By: shapovalov
Differential Revision: D41841035
fbshipit-source-id: b932eb2fefed77ae22f9757f9bd628ce12b11fad
Summary: Write the amalgamated mesh from the Mesh module to GLB. In this version, the JSON header and the binary data specified by the buffer are merged into the GLB output, and the image texture attributes are added.
Reviewed By: bottler
Differential Revision: D41489778
fbshipit-source-id: 3af0e9a8f9e9098e73737a254177802e0fb6bd3c
Summary: Fixes a bug that crashed render_flyaround whenever visualize_preds_keys was adjusted.
Reviewed By: shapovalov
Differential Revision: D41124462
fbshipit-source-id: 127045a91a055909f8bd56c8af81afac02c00f60
Summary:
Addresses the following issue:
https://github.com/facebookresearch/pytorch3d/issues/1345#issuecomment-1272881244
I.e., when installed from conda, `pytorch3d_implicitron_visualizer` crashes since it invokes `main()` while `main` requires a single positional arg `argv`.
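A minimal sketch of the kind of entry-point fix involved (assumed; the function names here are illustrative, not the actual script):
```
import sys

def main(argv):
    # ... run the visualizer using argv ...
    return 0

def entry_point():
    # Console scripts call this with no arguments, so forward sys.argv
    # instead of calling main() directly.
    return main(sys.argv)

if __name__ == "__main__":
    entry_point()
```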
Reviewed By: shapovalov
Differential Revision: D41533497
fbshipit-source-id: e53a923eb8b2f0f9c0e92e9c0866d9cb310c4799
Summary: To be consistent with the CUDA hashing, this diff replaces the Boost hasher with a simplified hasher for storing unique global edge_ids.
Reviewed By: kjchalup
Differential Revision: D41140382
fbshipit-source-id: 2ce598e5edcf6369fe13bd15d1f5e014b252027b
Summary: Autogenerate docs for the renderer too. This will be helpful but makes the TOC slightly ugly.
Reviewed By: kjchalup
Differential Revision: D40977315
fbshipit-source-id: 10831de3ced68080cb5671c5dc31d4da8500f761
Summary:
Every time I try to run code, I get this warning:
```
warnings.warn("Can't import pucuda.gl, not importing MeshRasterizerOpenGL.")
```
Of course, `pucuda` is a typo of `pycuda`.
This PR fixes the typo.
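For context, a sketch of the guarded import whose warning text carried the typo (assumed layout, not the exact PyTorch3D code):
```
import warnings

try:
    import pycuda.gl  # noqa: F401
except ImportError:
    warnings.warn("Can't import pycuda.gl, not importing MeshRasterizerOpenGL.")
```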
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1379
Reviewed By: kjchalup
Differential Revision: D41295562
Pulled By: bottler
fbshipit-source-id: 2bfa2a2dbe20a5347861d36fbff5094994c1253d
Summary:
Enum fields cause the following to crash since they are loaded as strings:
```
config = OmegaConf.load(autodumped_cfg_file)
Experiment(**config)
```
It would be good to come up with a general solution, but for now this just fixes the visualisation script.
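A hedged sketch of the workaround pattern (the field and enum names here are hypothetical):
```
from enum import Enum
from omegaconf import OmegaConf

class RenderMode(Enum):  # hypothetical enum-typed config field
    FULL = "full"
    MASKED = "masked"

cfg = OmegaConf.create({"render_mode": "full"})  # loaded value is a plain str
kwargs = dict(cfg)
# Convert the string back to the enum before passing it to the constructor:
kwargs["render_mode"] = RenderMode(kwargs["render_mode"])
```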
Reviewed By: bottler
Differential Revision: D41140426
fbshipit-source-id: 71c1c6b1fffe3b5ab1ca0114cfa3f0d81160278f
Summary:
Rasterize MC was not adapted to heterogeneous bundles.
There are some caveats though:
1) on CO3D, we get up to 18 points per image, which is too few for a reasonable visualisation (see below);
2) rasterising for a batch of 100 is slow.
I also moved the unpacking code close to the bundle to be able to reuse it.
{F789678778}
Reviewed By: bottler, davnov134
Differential Revision: D41008600
fbshipit-source-id: 9f10f1f9f9a174cf8c534b9b9859587d69832b71
Summary:
Allow a module's param_group member to specify overrides to the param groups of its members or their members.
Also add logging for param group assignments.
This allows defining `params.basis_matrix` in the param_groups of a voxel_grid.
Reviewed By: shapovalov
Differential Revision: D41080667
fbshipit-source-id: 49f3b0e5b36e496f78701db0699cbb8a7e20c51e
Summary: Fix indexing of directions after filtering of points by scaffold.
Reviewed By: shapovalov
Differential Revision: D40853482
fbshipit-source-id: 9cfdb981e97cb82edcd27632c5848537ed2c6837
Summary:
Allows loading of multiple categories.
Multiple categories are provided in a comma-separated list of category names.
Reviewed By: bottler, shapovalov
Differential Revision: D40803297
fbshipit-source-id: 863938be3aa6ffefe9e563aede4a2e9e66aeeaa8
Summary: Try to document implicitron. Most of this is autogenerated.
Reviewed By: shapovalov
Differential Revision: D40623742
fbshipit-source-id: 453508277903b7d987b1703656ba1ee09bc2c570
Summary: The bug led to non-coinciding origins of the rays emitted from perspective cameras when unit_directions=True.
Reviewed By: bottler
Differential Revision: D40865610
fbshipit-source-id: 398598e9e919b53e6bea179f0400e735bbb5b625
Summary: Some things fail if a parameter is not wrapped; in particular, it prevented other tensors from moving to the GPU.
Reviewed By: bottler
Differential Revision: D40819932
fbshipit-source-id: a23b38ceacd7f0dc131cb0355fef1178e3e2f7fd
Summary: Add an option to flat-pad the last delta. This might help when training on RGB only.
Reviewed By: shapovalov
Differential Revision: D40587475
fbshipit-source-id: c763fa38948600ea532c730538dc4ff29d2c3e0a
Summary: Make Implicitron run without visdom installed.
Reviewed By: shapovalov
Differential Revision: D40587974
fbshipit-source-id: dc319596c7a4d10a4c54c556dabc89ad9d25c2fb
Summary:
According to the profiler trace D40326775, _check_valid_rotation_matrix is slow because of the aten::all_close operation and the _safe_det_3x3 bottleneck. Disable the check by default unless the environment variable PYTORCH3D_CHECK_ROTATION_MATRICES is set to 1.
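A minimal sketch of such an environment-variable gate (assumed, not the exact PyTorch3D code):
```
import os
import warnings
import torch

def _check_valid_rotation_matrix(R: torch.Tensor, tol: float = 1e-7) -> None:
    # R: (N, 3, 3). Skip the expensive check unless explicitly requested.
    if os.environ.get("PYTORCH3D_CHECK_ROTATION_MATRICES", "0") != "1":
        return
    eye = torch.eye(3, device=R.device, dtype=R.dtype).expand_as(R)
    orthogonal = torch.allclose(R.bmm(R.transpose(1, 2)), eye, atol=tol)
    det_one = torch.allclose(torch.det(R), torch.ones_like(R[:, 0, 0]), atol=tol)
    if not (orthogonal and det_one):
        warnings.warn("R is not a valid rotation matrix")
```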
Comparison after applying the change:
```
Profiling/Function   get_world_to_view (ms)   Transform_points (ms)   specular (ms)
before               12.751                   18.577                  21.384
after                4.432 (34.7%)            9.248 (49.8%)           11.507 (53.8%)
```
Profiling trace:
https://pxl.cl/2h687
More details in https://docs.google.com/document/d/1kfhEQfpeQToikr5OH9ZssM39CskxWoJ2p8DO5-t6eWk/edit?usp=sharing
Reviewed By: kjchalup
Differential Revision: D40442503
fbshipit-source-id: 954b58de47de235c9d93af441643c22868b547d0
Summary: Keep the cause of hydra errors visible in some more cases.
Reviewed By: shapovalov
Differential Revision: D40516202
fbshipit-source-id: 8d214be5cc808a37738add77cc305fe099788546
Summary:
Adds the ability to have different learning rates for different parts of the model. The trainable parts of Implicitron have a new member
`param_groups`: a dictionary whose keys are names of individual parameters or of a module's members, and whose values are the parameter group that the parameter/member will be assigned to. The "self" key denotes the parameter group at the module level. No key, including "self", has to be defined. By default all parameters are put into the "default" parameter group and use the learning rate defined in the optimizer. This can be overridden at:
- the module level with the "self" key: all the parameters and the child modules' parameters will be put into that parameter group;
- the member level, which is the same as if `param_groups` in that member had the key "self" with that parameter group as its value. This is useful when members do not have `param_groups`, for example torch.nn.Linear;
- the parameter level: the parameter with the same name as the key will be put into that parameter group.
In the optimizer factory, parameters and their learning rates are then gathered recursively.
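A minimal, self-contained sketch of that gathering (not the actual Implicitron code; the module and group names here are hypothetical):
```
import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = torch.nn.Linear(8, 8)
        self.basis_matrix = torch.nn.Parameter(torch.eye(8))
        # member-level key "trunk", parameter-level key "basis_matrix"
        self.param_groups = {"self": "default", "trunk": "trunk", "basis_matrix": "basis"}

def gather_param_groups(model):
    groups = {}
    overrides = getattr(model, "param_groups", {})
    default = overrides.get("self", "default")
    for name, param in model.named_parameters():
        member = name.split(".")[0]
        group = overrides.get(name, overrides.get(member, default))
        groups.setdefault(group, []).append(param)
    return groups

lrs = {"default": 1e-3, "trunk": 1e-4, "basis": 1e-2}
optimizer = torch.optim.Adam(
    [{"params": p, "lr": lrs[g]} for g, p in gather_param_groups(TinyModel()).items()]
)
```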
Reviewed By: shapovalov
Differential Revision: D40145802
fbshipit-source-id: 631c02b8d79ee1c0eb4c31e6e42dbd3d2882078a
Summary:
Added initialization configuration for the last layer of the MLP decoding function. You can now set:
- last activation function (TensoRF uses sigmoid)
- last bias init (TensoRF uses 0, because of the sigmoid, of course)
- option to use Xavier initialization (we use relu so this should not be set)
Reviewed By: davnov134
Differential Revision: D40304981
fbshipit-source-id: ec398eb2235164ae85cb7c09b9660e843490ea04
Summary:
Small config system fix. Allows get_default_args to work on an instance which has been created with a dict (instead of a DictConfig) as an args field. E.g.
```
gm = GenericModel(
raysampler_AdaptiveRaySampler_args={"scene_extent": 4.0}
)
OmegaConf.structured(gm)
```
Reviewed By: shapovalov
Differential Revision: D40341047
fbshipit-source-id: 587d0e8262e271df442a80858949a48e5d6db3df
Summary: TensoRF applies a relu or softmax after the density grid. This diff adds the ability to replicate that.
Reviewed By: bottler
Differential Revision: D40023228
fbshipit-source-id: 9f19868cd68460af98ab6e61c7f708158c26dc08
Summary: More helpful errors when the output channels aren't 1 for density and 3 for color
Reviewed By: shapovalov
Differential Revision: D40341088
fbshipit-source-id: 6074bf7fefe11c8e60fee4db2760b776419bcfee
Summary: Couldn't build p3d on devfair because C++17 is unsupported. Two structured bindings sneaked in.
Reviewed By: bottler
Differential Revision: D40280967
fbshipit-source-id: 9627f3f9f76247a6cefbeac067fdead67c6f4e14
Summary:
TensoRF at step 2000 does volume cropping and resizing.
At those steps it finds the part of the voxel grid whose density is large enough to contain objects and resizes the grid to fit that object.
The change is done on 3 levels:
- the implicit function subscribes to epochs and, at specific epochs, finds the bounding box of the object and calls resizing of the color and density voxel grids to fit it
- the VoxelGrid module calls cropping of the underlying voxel grid and resizing to fit the previous size; it also adjusts its extents and translation to match the wanted size
- each voxel grid has its own way of cropping the underlying data
Reviewed By: kjchalup
Differential Revision: D39854548
fbshipit-source-id: 5435b6e599aef1eaab980f5421d3369ee4829c50
Summary:
Avoid creating a numpy array of random things just to split it: this can now generate a warning, e.g. if the list contains lists of varying lengths. There might also be a performance win here, and we could do more of the same if we care about that.
(The vanilla way to avoid the new warning is to replace `np.split(a,` with `np.split(np.array(a, dtype=object), `, btw.)
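An illustration of the warning being avoided (not the PyTorch3D code itself):
```
import numpy as np

ragged = [[1, 2, 3], [4, 5]]  # lists of varying lengths
# Building np.array(ragged) implicitly (e.g. via np.split(ragged, [1])) now
# warns or errors for ragged input unless dtype=object is passed explicitly:
chunks = np.split(np.array(ragged, dtype=object), [1])
# The change here instead slices the plain Python list directly:
chunks = [ragged[:1], ragged[1:]]
```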
Reviewed By: shapovalov
Differential Revision: D40209308
fbshipit-source-id: daae33a23ceb444e8e7241f72ce1525593e2f239
Summary: The forward method is sped up using the scaffold, a low-resolution voxel grid used to filter out points in empty space. These points will be predicted as having 0 density and (0, 0, 0) color; points which were not evaluated as empty space are passed through the full evaluation.
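A minimal sketch of scaffold-based filtering (assumed behaviour, not the actual Implicitron implementation):
```
import torch
import torch.nn.functional as F

def filter_with_scaffold(points, scaffold, decoder):
    # points: (n_points, 3) in [-1, 1]^3; scaffold: (1, 1, D, H, W) occupancy grid
    grid = points.view(1, -1, 1, 1, 3)
    occupied = F.grid_sample(scaffold, grid, align_corners=False).view(-1) > 0
    density = points.new_zeros(points.shape[0], 1)
    color = points.new_zeros(points.shape[0], 3)
    if occupied.any():
        # Only points in occupied cells reach the expensive decoder.
        density[occupied], color[occupied] = decoder(points[occupied])
    return density, color

def dummy_decoder(p):
    return torch.ones(len(p), 1), torch.ones(len(p), 3)

points = torch.rand(100, 3) * 2 - 1
scaffold = (torch.rand(1, 1, 8, 8, 8) > 0.5).float()
density, color = filter_with_scaffold(points, scaffold, dummy_decoder)
```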
Reviewed By: kjchalup
Differential Revision: D39579671
fbshipit-source-id: 8eab8bb43ef77c2a73557efdb725e99a6c60d415
Summary: Avoids use of `torch.cat` operation when rendering a volume by instead issuing multiple calls to `torch.nn.functional.grid_sample`. Density and color tensors can be large.
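A sketch of the idea (assumed, not the exact PyTorch3D change): sample the density and color volumes separately instead of concatenating them first.
```
import torch
import torch.nn.functional as F

def sample_volume_features(densities, colors, grid):
    # densities: (N, 1, D, H, W), colors: (N, 3, D, H, W), grid: (N, P, 1, 1, 3)
    # Former approach: F.grid_sample(torch.cat([densities, colors], dim=1), grid),
    # which materializes a large concatenated tensor.
    sampled_density = F.grid_sample(densities, grid, align_corners=True)
    sampled_color = F.grid_sample(colors, grid, align_corners=True)
    return sampled_density, sampled_color

densities = torch.rand(2, 1, 16, 16, 16)
colors = torch.rand(2, 3, 16, 16, 16)
grid = torch.rand(2, 1024, 1, 1, 3) * 2 - 1
d, c = sample_volume_features(densities, colors, grid)
```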
Reviewed By: bottler
Differential Revision: D40072399
fbshipit-source-id: eb4cd34f6171d54972bbf2877065f973db497de0
Summary:
Torch C++ extension for Marching Cubes
- Add a torch C++ extension for marching cubes. Observe a speedup of ~255x-324x (over varying batch sizes and spatial resolutions).
- Add the C++ implementation to existing unit tests.
(Note: this ignores all push blocking failures!)
Reviewed By: kjchalup
Differential Revision: D39590638
fbshipit-source-id: e44d2852a24c2c398e5ea9db20f0dfaa1817e457
Summary: Overhaul of marching_cubes_naive for better performance and to avoid relying on unstable hashing. In particular, instead of hashing vertex positions, we index each interpolated vertex with its corresponding edge in the 3d grid.
Reviewed By: kjchalup
Differential Revision: D39419642
fbshipit-source-id: b5fede3525c545d1d374198928dfb216262f0ec0
Summary:
Threaded the for loop:
```
for (int yi = 0; yi < H; ++yi) {...}
```
in function `RasterizeMeshesNaiveCpu()`.
Chunk sizes are approximately equal.
Reviewed By: bottler
Differential Revision: D40063604
fbshipit-source-id: 09150269405538119b0f1b029892179501421e68
Summary: Loads the whole dataset, moves it to the device, and sends it for sampling to enable full-dataset heterogeneous raysampling.
Reviewed By: bottler
Differential Revision: D39263009
fbshipit-source-id: c527537dfc5f50116849656c9e171e868f6845b1
Summary:
Changed ray_sampler and metrics to be able to use mixed-frame raysampling.
Ray_sampler now has a new member which it passes to the PyTorch3D raysampler.
If the raybundle is heterogeneous, metrics now samples images by padding xys first. This reduces memory consumption.
Reviewed By: bottler, kjchalup
Differential Revision: D39542221
fbshipit-source-id: a6fec23838d3049ae5c2fd2e1f641c46c7c927e3
Summary: New ImplicitronRayBundle with added camera IDs and camera counts. Added to enable a single raybundle type inside Implicitron and easier extension in the future. Since RayBundle is a namedtuple and RayBundleHeterogeneous is a dataclass, RayBundleHeterogeneous cannot inherit from RayBundle. So without ImplicitronRayBundle, every function that uses RayBundle would have to use Union[RayBundle, RayBundleHeterogeneous], which is confusing and unnecessarily complicated.
Reviewed By: bottler, kjchalup
Differential Revision: D39262999
fbshipit-source-id: ece160e32f6c88c3977e408e966789bf8307af59
Summary:
Added heterogeneous raysampling to the PyTorch3D raysampler: different cameras are sampled a different number of times.
It now returns a RayBundle if heterogeneous raysampling is off and the new RayBundleHeterogeneous (with added fields `camera_ids` and `camera_counts`) if it is on. Heterogeneous raysampling is on if `n_rays_total` is not None.
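A usage sketch based on this description (the `n_rays_total` argument is taken from the diff description; exact defaults may differ):
```
import torch
from pytorch3d.renderer import NDCMultinomialRaysampler, PerspectiveCameras

cameras = PerspectiveCameras(
    R=torch.eye(3)[None].expand(4, 3, 3), T=torch.zeros(4, 3)
)
raysampler = NDCMultinomialRaysampler(
    image_width=64, image_height=64, n_pts_per_ray=16,
    min_depth=0.1, max_depth=3.0, n_rays_total=1024,
)
# With n_rays_total set, cameras are sampled a varying number of times and the
# returned bundle carries camera_ids / camera_counts.
bundle = raysampler(cameras)
```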
Reviewed By: bottler
Differential Revision: D39542222
fbshipit-source-id: d3d88d822ec7696e856007c088dc36a1cfa8c625
Summary:
This is quite a thin wrapper – not sure we need it. The motivation is that `Transform3d` is not as matrix-centric now, it can be converted to SE(3) logarithm equally easily.
It simplifies things like averaging cameras and getting axis-angle of camera rotation (previously, one would need to call `se3_log_map(cameras.get_world_to_camera_transform().get_matrix())`), now one fewer thing to call / discover.
Reviewed By: bottler
Differential Revision: D39928000
fbshipit-source-id: 85248d5b8af136618f1d08791af5297ea5179d19
Summary:
`get_rotation_to_best_fit_xy` is useful to expose externally; however, there was a bug (which we probably did not care about for our use case): it could return a rotation matrix with det(R) == −1.
The diff fixes that, and also makes centroid optional (it can be computed from points).
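A minimal sketch of the determinant fix (the column-flip convention here is an assumption):
```
import torch

def fix_reflection(R: torch.Tensor) -> torch.Tensor:
    # A best-fit "rotation" from a decomposition can be a reflection (det == -1);
    # flipping one axis restores a proper rotation with det == +1.
    if torch.det(R) < 0:
        R = R.clone()
        R[:, -1] *= -1
    return R

R = torch.diag(torch.tensor([1.0, 1.0, -1.0]))  # a reflection
assert torch.det(fix_reflection(R)) > 0
```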
Reviewed By: bottler
Differential Revision: D39926791
fbshipit-source-id: 5120c7892815b829f3ddcc23e93d4a5ec0ca0013
Summary: Any module can subscribe to step updates from the training loop. Once the training loop publishes a step, the voxel grid changes its dimensions. During the construction of VoxelGridModule and its parameters, it does not know the resolution that will be loaded from the checkpoint, so before checkpoint loading a hook runs which changes the VoxelGridModule's parameters to match the shapes in the loaded checkpoint.
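A sketch of such a pre-load hook (assumed shape handling, not the actual VoxelGridModule code):
```
import torch

class ResizableGrid(torch.nn.Module):
    def __init__(self, resolution=(8, 8, 8)):
        super().__init__()
        self.voxel_grid = torch.nn.Parameter(torch.zeros(1, 3, *resolution))
        self._register_load_state_dict_pre_hook(self._resize_to_checkpoint)

    def _resize_to_checkpoint(self, state_dict, prefix, *args):
        # Recreate the parameter with the checkpoint's shape before loading.
        key = prefix + "voxel_grid"
        if key in state_dict and state_dict[key].shape != self.voxel_grid.shape:
            self.voxel_grid = torch.nn.Parameter(torch.zeros_like(state_dict[key]))

src = ResizableGrid(resolution=(16, 16, 16))
dst = ResizableGrid(resolution=(8, 8, 8))
dst.load_state_dict(src.state_dict())  # dst.voxel_grid is now (1, 3, 16, 16, 16)
```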
Reviewed By: bottler
Differential Revision: D39026775
fbshipit-source-id: 0d359ea5c8d2eda11d773d79c7513c83585d5f17
Summary:
User reported that cloned cameras fail to save. The error with latest PyTorch is
```
pickle.PicklingError: Can't pickle ~T_destination: attribute lookup T_destination on torch.nn.modules.module failed
```
This fixes it.
Reviewed By: btgraham
Differential Revision: D39692258
fbshipit-source-id: 75bbf3b8dfa0023dc28bf7d4cc253ca96e46a64d
Summary:
We need to do packing/unpacking in 2 places for mixed-frame raysampling (metrics and raysampler), but the tensors that need to be unpacked/packed have more than two dimensions.
I could have reshaped and stored the dimensions, but that seems to just complicate the code there with something which packed_to_padded should support.
I could have made a separate function for Implicitron, but it would be confusing to have two different padded_to_packed functions inside the PyTorch3D codebase, one of which does packing for (b, max) and (b, max, f) and the other for (b, max, …).
Reviewed By: bottler
Differential Revision: D39729026
fbshipit-source-id: 2bdebf290dcc6c316b7fe1aeee49bbb5255e508c
Summary: The implicit function, its members, and its internal workings.
Reviewed By: kjchalup
Differential Revision: D38829764
fbshipit-source-id: 28394fe7819e311ed52c9defc9a1b29f37fbc495
Summary: Allow using the new `foreach` option on optimizers.
Reviewed By: shapovalov
Differential Revision: D39694843
fbshipit-source-id: 97109c245b669bc6edff0f246893f95b7ae71f90
Summary: Add the ability to process arbitrary point shapes `[n_grids, ..., 3]` instead of only `[n_grids, n_points, 3]`.
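A sketch of the reshape trick this enables (assumed, not the exact implementation):
```
import torch

def evaluate_any_shape(fn, points):
    # points: (n_grids, ..., 3); flatten the middle dims, evaluate, restore them.
    n_grids, *mid, _ = points.shape
    flat = points.reshape(n_grids, -1, 3)
    out = fn(flat)  # (n_grids, n_points, C)
    return out.reshape(n_grids, *mid, out.shape[-1])

def toy_fn(p):  # stand-in for a voxel grid evaluation returning one channel
    return p.mean(dim=-1, keepdim=True)

out = evaluate_any_shape(toy_fn, torch.rand(2, 4, 5, 3))  # -> (2, 4, 5, 1)
```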
Reviewed By: bottler
Differential Revision: D39574373
fbshipit-source-id: 0a9ecafe9ea58cd8f909644de43a1185ecf934f4
Summary:
Added export of UV textures to IO.save_mesh in PyTorch3D.
MeshObjFormat now passes verts_uv, faces_uv, and texture_map as input to save_obj
TODO: check if TexturesUV.verts_uv_list or TexturesUV.verts_uv_padded() should be passed to save_obj
IO.save_mesh(obj_file, meshes, decimal_places=2) should be IO().save_mesh(obj_file, meshes, decimal_places=2)
Reviewed By: bottler
Differential Revision: D39617441
fbshipit-source-id: 4628b7f26f70e38c65f235852b990c8edb0ded23
Summary:
A significant speedup (e.g. >2% of a forward pass).
Move the NDCMultinomialRaysampler parts of AbstractMaskRaySampler to be members instead of living in a dict. The dict was hiding them from the nn.Module system, so their _xy_grid members were remaining on the CPU and were therefore being copied to the GPU in every forward pass.
(We couldn't easily use a ModuleDict here because the enum keys are not strs.)
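An illustration of the pitfall (not the actual AbstractMaskRaySampler code):
```
import torch

class Holder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = {"sampler": torch.nn.Linear(2, 2)}  # invisible to nn.Module
        self.visible_sampler = torch.nn.Linear(2, 2)      # registered as a submodule

holder = Holder()
# Only visible_sampler's weight and bias are registered, so .to(device) would
# leave the dict-held module's tensors behind on the CPU.
print(len(list(holder.parameters())))  # 2
```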
Reviewed By: shapovalov
Differential Revision: D39668589
fbshipit-source-id: 719b88e4a08fd7263a284e0ab38189e666bd7e3a
Summary:
- indicate location of OmegaConf.structured failures
- split the data gathering from enable_get_default_args to ease experimenting with it.
- comment fixes.
- nicer error when a_class_type has a weird type.
Reviewed By: kjchalup
Differential Revision: D39434447
fbshipit-source-id: b80c7941547ca450e848038ef5be95b7ebbe8f3e
Summary: Various fixes to get visualize_reconstruction running, and an interactive test for it.
Reviewed By: kjchalup
Differential Revision: D39286691
fbshipit-source-id: 88735034cc01736b24735bcb024577e6ab7ed336
Summary: Workaround for oddity with new hydra.
Reviewed By: davnov134
Differential Revision: D39280639
fbshipit-source-id: 76e91947f633589945446db93cf2dbc259642f8a
Summary: Samples batches without replacement if the number of samples is not specified. This makes sure that we always iterate over the whole dataset in each epoch.
Reviewed By: bottler
Differential Revision: D39270786
fbshipit-source-id: 0c983d1f5e0af711463abfb23939bc0d2b5172a0
Summary:
Move the flyaround rendering function into core implicitron.
This unblocks an example in the facebookresearch/co3d repo.
Reviewed By: bottler
Differential Revision: D39257801
fbshipit-source-id: 6841a88a43d4aa364dd86ba83ca2d4c3cf0435a4
Summary:
The self._stratified_sampling attribute is always overridden unless stratified_sampling is explicitly set to None. However, the desired default behavior is that the value of self._stratified_sampling is used unless the argument stratified_sampling is set to True/False. Changing the default to None achieves this.
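A minimal sketch of the fixed default behaviour (the class and argument names are assumed from the description):
```
class Sampler:
    def __init__(self, stratified_sampling: bool = True):
        self._stratified_sampling = stratified_sampling

    def sample(self, stratified_sampling=None):
        # Only an explicit True/False overrides the instance attribute.
        return (
            self._stratified_sampling
            if stratified_sampling is None
            else stratified_sampling
        )

assert Sampler(True).sample() is True        # attribute used by default
assert Sampler(True).sample(False) is False  # explicit argument wins
```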
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1324
Reviewed By: bottler
Differential Revision: D39259775
Pulled By: davnov134
fbshipit-source-id: e01bb747ac80c812eb27bf22e67f5e14f29acadd
Summary: On each call of stats.update, the object calculates the current average iteration time by taking the time elapsed since time_start and dividing it by the current number of steps. It saves the result to an AverageMeter object which, when queried, returns the average of the values saved, so the time is averaged twice, which biases it towards the starting value (which is often larger).
Reviewed By: kjchalup
Differential Revision: D39206989
fbshipit-source-id: ccab5233d7aaca1ac4fd626fb329b83c7c0d6af9
Summary: Currently some implicit functions in Implicitron take a raybundle, others take ray_points_world. The raybundle is what they really need. However, the raybundle is going to become a bit more flexible later, as it will contain different numbers of rays for each camera.
Reviewed By: bottler
Differential Revision: D39173751
fbshipit-source-id: ebc038e426d22e831e67a18ba64655d8a61e1eb9
Summary: Update the docstring for try_get_projection_transform on the API design.
Reviewed By: kjchalup
Differential Revision: D39227333
fbshipit-source-id: c9d0e625735d4972116d1f71865fb9b763e684de
Summary:
1) Update the rasterizer/point rasterizer to accommodate the fisheye camera. Specifically, transform_points is used in place of explicit transform compositions.
2) In the rasterizer unit tests, update the corresponding tests for rasterizer and point_rasterizer. Address comments to test fisheye against perspective cameras when distortions are turned off.
3) Address comments to add an end-to-end test for FisheyeCameras. In test_render_meshes, FisheyeCameras are added to camera enumerations whenever possible.
4) Test renderings with FisheyeCameras of different params on the cow mesh.
5) Use compositions for linear cameras whenever possible.
Reviewed By: kjchalup
Differential Revision: D38932736
fbshipit-source-id: 5b7074fc001f2390f4cf43c7267a8b37fd987547
Summary:
Amend FisheyeCameras by adding tests for all combinations of params and for different batch_sizes.
Reviewed By: kjchalup
Differential Revision: D39176747
fbshipit-source-id: 830d30da24beeb2f0df52db0b17a4303ed53b59c
Summary: A dummy value in test_opengl_utils seems to be able to break tests in test_mesh_renderer_opengl{,_to}.
Reviewed By: kjchalup
Differential Revision: D39173275
fbshipit-source-id: 83b15159f70135ea575d5085c7b6b37badd6e49e
Summary: D38919607 (c4545a7cbc) and D38858887 (06cbba2628) were premature; it turns out CUDA 10.2 doesn't support C++17.
Reviewed By: bottler
Differential Revision: D39156205
fbshipit-source-id: 5e2e84cc4a57d1113a915166631651d438540d56
Summary:
Adds yaml configs to train selected methods on CO3Dv2.
A few more updates:
1) moved some fields to base classes so that we can check is_multisequence in experiment.py
2) skip loading all train cameras for multisequence datasets; without this, co3d-fewview is untrainable
3) fix a bug in the json index dataset provider v2
Reviewed By: kjchalup
Differential Revision: D38952755
fbshipit-source-id: 3edac6fc8e20775aa70400bd73a0e6d52b091e0c
Summary: Address comments to add benchmarks for cameras and the new fisheye cameras. The dependency functions in test_cameras have been updated in Diff 1. The following two snapshots show the benchmarking results.
Reviewed By: kjchalup
Differential Revision: D38991914
fbshipit-source-id: 51fe9bb7237543e4ee112c9f5068a4cf12a9d482
Summary:
1. A fisheye camera model that generalizes the pinhole camera by considering distortions (i.e. radial, tangential and thin-prism distortions).
2. Added tests against perspective cameras when distortions are off and against Aria data points when distortions are on.
3. Address comments to test unhandled shapes between points and transforms. Added tests for __FIELDS, shape broadcasts, CUDA, etc.
4. Address earlier comments for code efficiency (e.g., adopted torch.norm; torch.solve for matrix inverse; removed in-place operations and unnecessary clones; expand in place of repeat, etc.).
Reviewed By: jcjohnson
Differential Revision: D38407094
fbshipit-source-id: a3ab48c85c496ac87af692d5d461bb3fc2a2db13
Summary: I think there is a typo here: I could not find any MultiPassEARenderer, just MultiPassEmissionAbsorptionRenderer?
Reviewed By: bottler
Differential Revision: D39056641
fbshipit-source-id: 4dd0b123fc795a0083a957786c032e23dc5abac9
Summary: Added replaceable decoding functions which will be applied after the voxel grid to get color and density.
Reviewed By: bottler
Differential Revision: D38829763
fbshipit-source-id: f21ce206c1c19548206ea2ce97d7ebea3de30a23
Summary: Simple wrapper around voxel grids to make them a module
Reviewed By: bottler
Differential Revision: D38829762
fbshipit-source-id: dfee85088fa3c65e396cc7d3bf7ebaaffaadb646
Summary:
One of the docstrings is a disaster; see https://pytorch3d.readthedocs.io/en/latest/modules/ops.html
Also some minor fixes I encountered when browsing the code.
Reviewed By: bottler
Differential Revision: D38581595
fbshipit-source-id: 3b6ca97788af380a44df9144a6a4cac782c6eab8
Summary: Moved the MLP and transformer from nerf to a new file to be reused.
Reviewed By: bottler
Differential Revision: D38828150
fbshipit-source-id: 8ff77b18b3aeeda398d90758a7bcb2482edce66f
Summary: Added voxel grid classes from TensoRF, both in their factorized (CP and VM) and full form.
Reviewed By: bottler
Differential Revision: D38465419
fbshipit-source-id: 8b306338af58dc50ef47a682616022a0512c0047
Summary: Fix EPS issue that causes numerical instabilities when boxes are very close
Reviewed By: kjchalup
Differential Revision: D38661465
fbshipit-source-id: d2b6753cba9dc2f0072ace5289c9aa815a1a29f6
Summary: Remove the hardcoded block reduction operation from the `sample_farthest_points.cu` code and replace it with `cub::BlockReduce`, reducing the complexity of the code and letting established libraries do the thinking for us.
Reviewed By: bottler
Differential Revision: D38617147
fbshipit-source-id: b230029c55f05cda0aab1648d3105a8d3e92d27b
Summary: Split the Volumes class into a data part and a location part so that the location part can be reused in the planned VoxelGrid classes.
Reviewed By: bottler
Differential Revision: D38782015
fbshipit-source-id: 489da09c5c236f3b81961ce9b09edbd97afaa7c8
Summary:
generic_model_args no longer exists. Update some references to it, mostly in doc.
This fixes the testing of all the yaml files in test_forward pass.
Reviewed By: shapovalov
Differential Revision: D38789202
fbshipit-source-id: f11417efe772d7f86368b3598aa66c52b1309dbf
Summary:
We identified that these logging statements can deteriorate performance in certain cases. I propose removing them from the regular renderer implementation and letting individuals re-insert debug logging wherever needed on a case-by-case basis.
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1260
Reviewed By: kjchalup
Differential Revision: D38737439
Pulled By: bottler
fbshipit-source-id: cf9dcbbeae4dbf214c2e17d5bafa00b2ff796393
Summary: Useful for visualising colmap output where some frames are not correctly registered.
Reviewed By: bottler
Differential Revision: D38743191
fbshipit-source-id: e823df2997870dc41d76784e112d4349f904d311
Summary: Previously, "psnr" was evaluated between the masked g.t. image and the render. To avoid confusion, "psnr" is now renamed to "psnr_masked".
Reviewed By: bottler
Differential Revision: D38707511
fbshipit-source-id: 8ee881ab1a05453d6692dde9782333a47d8c1234
Summary: Builds for new PyTorch 1.12.1. Drop builds for PyTorch 1.8.0 and 1.8.1.
Reviewed By: kjchalup
Differential Revision: D38658991
fbshipit-source-id: 6192e226c2154cd051eeee98498d9a395cfd6fd5
Summary: Reports also the PSNR between the unmasked G.T. image and the render.
Reviewed By: bottler
Differential Revision: D38655943
fbshipit-source-id: 1603a2d02116ea1ce037e5530abe1afc65a2ba93
Summary:
**"filename"**: "projects/nerf/nerf/implicit_function.py"
**"warning_type"**: "Incompatible variable type [9]",
**"warning_message"**: " input_skips is declared to have type `Tuple[int]` but is used as type `Tuple[]`.",
**"warning_line"**: 256,
**"fix"**: input_skips: Tuple[int,...] = ()
Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1288
Reviewed By: kjchalup
Differential Revision: D38615188
Pulled By: bottler
fbshipit-source-id: a014344dd6cf2125f564f948a3c905ceb84cf994
Summary: This makes the new volumes tutorial work on google colab.
Reviewed By: kjchalup
Differential Revision: D38501906
fbshipit-source-id: a606a357e929dae903dc4d9067bd1519f05b1458
Summary: Need to pip install visdom in the new volumes tutorial.
Reviewed By: kjchalup
Differential Revision: D38501905
fbshipit-source-id: 534bf097e41f05b3389e9420e6dd2b61a4517861
Summary: Linear followed by exponential LR progression. Needed for making Blender scenes converge.
Reviewed By: kjchalup
Differential Revision: D38557007
fbshipit-source-id: ad630dbc5b8fabcb33eeb5bdeed5e4f31360bac2
Summary:
LLFF (and most/all non-synth datasets) will have no background/foreground distinction. Add support for data with no fg mask.
Also, we had a bug in stats loading, like this:
* Load stats
* One of the stats has a history of length 0
* That's fine, e.g. maybe it's fg_error but the dataset has no notion of fg/bg. So leave it as len 0
* Check whether all the stats have the same history length as an arbitrarily chosen "reference-stat"
* Oops, the reference-stat happened to be the stat with length 0
* assert (legit_stat_len == reference_stat_len (=0)) ---> failed assert
Also some minor fixes (from Jeremy's other diff) to support LLFF
Reviewed By: davnov134
Differential Revision: D38475272
fbshipit-source-id: 5b35ac86d1d5239759f537621f41a3aa4eb3bd68
Summary: In a multisequence (fewview) scenario, it does not make sense to use all cameras for evaluating the difficulty as they come from different scenes. Using only this batch’s source (known) cameras instead.
Reviewed By: bottler
Differential Revision: D38491070
fbshipit-source-id: d6312d8fbb125b28a33db9f53d4215bcd1ca28a8
Summary:
Request in https://github.com/facebookresearch/pytorch3d/issues/1233 for option to disable CUDA build.
Also option to disable binary build completely. This could be useful e.g. in the config tutorial where we need a small python-only part of pytorch3d.
Reviewed By: kjchalup
Differential Revision: D38458624
fbshipit-source-id: 421a0b1cc31306d7e322d3e743e30a7533d7f034
Summary:
Misc fixes.
- most important: the Mac image is gone, so switch to a newer one.
- torch.concat is new; it was used accidentally
- remove lpips from testing in meta.yaml as it is breaking the conda test. Better to leave the relevant tests failing in OSS.
- TypedDict usage is breaking implicitron on Python 3.7.
Reviewed By: patricklabatut
Differential Revision: D38458164
fbshipit-source-id: b16c26453a743b9a771e2a6787b9a4d2a52e41c2
Summary: This field is specific to one purpose.
Reviewed By: patricklabatut
Differential Revision: D38424891
fbshipit-source-id: e017304497012430c30e436da7052b9ad6fc7614
Summary: One way to tidy the installation so we don't install files in site-packages/projects. Fixes https://github.com/facebookresearch/pytorch3d/issues/1279
Reviewed By: shapovalov, davnov134
Differential Revision: D38426772
fbshipit-source-id: ac1a54fbf230adb53904701e1f38bf9567f647ce
Summary: Don't copy from one part of the config to another; rather, do the copy within GenericModel.
Reviewed By: davnov134
Differential Revision: D38248828
fbshipit-source-id: ff8af985c37ea1f7df9e0aa0a45a58df34c3f893
Summary: Made the config system call open_dict when it calls the tweak function.
Reviewed By: shapovalov
Differential Revision: D38315334
fbshipit-source-id: 5924a92d8d0bf399bbf3788247f81fc990e265e7
Summary:
Stats are logically connected to the training loop, not to the model. Hence, moving them to the training loop.
Also removing resume_epoch from OptimizerFactory in favor of a single place, ModelFactory. This removes the need for config consistency checks etc.
Reviewed By: kjchalup
Differential Revision: D38313475
fbshipit-source-id: a1d188a63e28459df381ff98ad8acdcdb14887b7
Summary: Before this diff, train_stats.py would not be created by default, EXCEPT when resuming training. This makes them appear from the start.
Reviewed By: shapovalov
Differential Revision: D38320341
fbshipit-source-id: 8ea5b99ec81c377ae129f58e78dc2eaff94821ad
Summary: Remove the dataset's need to provide the task type.
Reviewed By: davnov134, kjchalup
Differential Revision: D38314000
fbshipit-source-id: 3805d885b5d4528abdc78c0da03247edb9abf3f7
Summary:
Added _NEED_CONTROL
to JsonIndexDatasetMapProviderV2 and made dataset_tweak_args use it.
Reviewed By: bottler
Differential Revision: D38313914
fbshipit-source-id: 529847571065dfba995b609a66737bd91e002cfe
Summary: Only import it if you ask for it.
Reviewed By: kjchalup
Differential Revision: D38327167
fbshipit-source-id: 3f05231f26eda582a63afc71b669996342b0c6f9
Summary: Made eval_batches be set in the call to `__init__`, not after construction as it was before.
Reviewed By: bottler
Differential Revision: D38275943
fbshipit-source-id: 32737401d1ddd16c284e1851b7a91f8b041c406f
Summary: Currently, seeds are set only inside the train loop. But this does not ensure that the model weights are initialized the same way everywhere, which makes all experiments irreproducible. This diff fixes it.
Reviewed By: bottler
Differential Revision: D38315840
fbshipit-source-id: 3d2ecebbc36072c2b68dd3cd8c5e30708e7dd808
Summary: Make a dummy single-scene dataset using the code from generate_cow_renders (used in existing NeRF tutorials)
Reviewed By: kjchalup
Differential Revision: D38116910
fbshipit-source-id: 8db6df7098aa221c81d392e5cd21b0e67f65bd70
Summary:
This large diff rewrites a significant portion of Implicitron's config hierarchy. The new hierarchy, and some of the default implementation classes, are as follows:
```
Experiment
data_source: ImplicitronDataSource
dataset_map_provider
data_loader_map_provider
model_factory: ImplicitronModelFactory
model: GenericModel
optimizer_factory: ImplicitronOptimizerFactory
training_loop: ImplicitronTrainingLoop
evaluator: ImplicitronEvaluator
```
1) Experiment (used to be ExperimentConfig) is now a top-level Configurable and contains as members mainly (mostly new) high-level factory Configurables.
2) Experiment's job is to run factories, do some accelerate setup and then pass the results to the main training loop.
3) ImplicitronOptimizerFactory and ImplicitronModelFactory are new high-level factories that create the optimizer, scheduler, model, and stats objects.
4) TrainingLoop is a new configurable that runs the main training loop and the inner train-validate step.
5) Evaluator is a new configurable that TrainingLoop uses to run validation/test steps.
6) GenericModel is not the only model choice anymore. Instead, ImplicitronModelBase (by default instantiated with GenericModel) is a member of Experiment and can be easily replaced by a custom implementation by the user.
All the new Configurables are children of ReplaceableBase, and can be easily replaced with custom implementations.
In addition, I added support for the exponential LR schedule, updated the config files and the test, and added a config file that reproduces NeRF results plus a test to run the repro experiment.
Reviewed By: bottler
Differential Revision: D37723227
fbshipit-source-id: b36bee880d6aa53efdd2abfaae4489d8ab1e8a27
Summary:
This is an internal change in the config system. It allows redefining a pluggable implementation with new default values. This is useful in notebooks / interactive use. For example, this now works:
```
class A(ReplaceableBase):
    pass

@registry.register
class B(A):
    i: int = 4

class C(Configurable):
    a: A
    a_class_type: str = "B"

    def __post_init__(self):
        run_auto_creation(self)

expand_args_fields(C)

@registry.register
class B(A):
    i: int = 5

c = C()
assert c.a.i == 5
```
Reviewed By: shapovalov
Differential Revision: D38219371
fbshipit-source-id: 72911a9bd3426d3359cf8802cc016fc7f6d7713b