Summary: Fix indexing of directions after filtering points by the scaffold.
Reviewed By: shapovalov
Differential Revision: D40853482
fbshipit-source-id: 9cfdb981e97cb82edcd27632c5848537ed2c6837
Summary:
Allows loading of multiple categories.
Multiple categories are provided in a comma-separated list of category names.
Reviewed By: bottler, shapovalov
Differential Revision: D40803297
fbshipit-source-id: 863938be3aa6ffefe9e563aede4a2e9e66aeeaa8
Summary: Try to document implicitron. Most of this is autogenerated.
Reviewed By: shapovalov
Differential Revision: D40623742
fbshipit-source-id: 453508277903b7d987b1703656ba1ee09bc2c570
Summary: The bug led to non-coinciding origins of the rays emitted from perspective cameras when unit_directions=True.
Reviewed By: bottler
Differential Revision: D40865610
fbshipit-source-id: 398598e9e919b53e6bea179f0400e735bbb5b625
Summary: Some things fail if a parameter is not wrapped; in particular, it prevented other tensors from moving to the GPU.
Reviewed By: bottler
Differential Revision: D40819932
fbshipit-source-id: a23b38ceacd7f0dc131cb0355fef1178e3e2f7fd
Summary: Add an option to flat-pad the last delta. Might help when training on RGB only.
Reviewed By: shapovalov
Differential Revision: D40587475
fbshipit-source-id: c763fa38948600ea532c730538dc4ff29d2c3e0a
Summary: Make Implicitron run without visdom installed.
Reviewed By: shapovalov
Differential Revision: D40587974
fbshipit-source-id: dc319596c7a4d10a4c54c556dabc89ad9d25c2fb
Summary:
According to the profiler trace D40326775, _check_valid_rotation_matrix is slow because of the aten::all_close operation and _safe_det_3x3 bottlenecks. Disable the check by default unless the environment variable PYTORCH3D_CHECK_ROTATION_MATRICES is set to 1.
Comparison after applying the change:
```
Function   get_world_to_view (ms)   transform_points (ms)   specular (ms)
before     12.751                   18.577                  21.384
after      4.432 (34.7%)            9.248 (49.8%)           11.507 (53.8%)
```
Profiling trace:
https://pxl.cl/2h687
More details in https://docs.google.com/document/d/1kfhEQfpeQToikr5OH9ZssM39CskxWoJ2p8DO5-t6eWk/edit?usp=sharing
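The gating described above can be sketched as follows; the helper name and the `env` parameter are hypothetical illustrations, not the actual PyTorch3D code.

```python
import os

# Hedged sketch: the rotation-matrix validity check is off by default and
# only enabled when PYTORCH3D_CHECK_ROTATION_MATRICES is set to "1".
def rotation_checks_enabled(env=os.environ):
    # `env` is a parameter here only so the sketch is easy to test; the
    # real code would read os.environ directly.
    return env.get("PYTORCH3D_CHECK_ROTATION_MATRICES", "") == "1"
```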
Reviewed By: kjchalup
Differential Revision: D40442503
fbshipit-source-id: 954b58de47de235c9d93af441643c22868b547d0
Summary: Keep the cause of hydra errors visible in some more cases.
Reviewed By: shapovalov
Differential Revision: D40516202
fbshipit-source-id: 8d214be5cc808a37738add77cc305fe099788546
Summary:
Adds the ability to have different learning rates for different parts of the model. The trainable parts of Implicitron have a new member:

param_groups: a dictionary whose keys are names of individual parameters or of a module's members, and whose values are the parameter group the parameter/member will be assigned to. The "self" key denotes the parameter group at the module level. No key, including "self", has to be defined. By default all parameters are put into the "default" parameter group and use the learning rate defined in the optimizer. This can be overridden at:
- the module level, with the "self" key: all the parameters and child modules' parameters will be put into that parameter group
- the member level, which is the same as if `param_groups` in that member had key="self" and value equal to that parameter group. This is useful for members that do not have `param_groups`, for example torch.nn.Linear.
- the parameter level: the parameter with the same name as the key will be put into that parameter group.

In the optimizer factory, parameters and their learning rates are then gathered recursively.
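The recursive gathering can be sketched roughly as follows; `Node` and `gather` are hypothetical stand-ins for Implicitron's modules and optimizer factory, not the real API.

```python
# Hedged sketch of param_groups resolution (hypothetical, not the actual
# Implicitron implementation).

class Node:
    """Minimal stand-in for a trainable module with optional param_groups."""
    def __init__(self, params, children=None, param_groups=None):
        self.params = params            # {param_name: param_object}
        self.children = children or {}  # {member_name: Node}
        self.param_groups = param_groups or {}

def gather(node, inherited="default", prefix=""):
    """Recursively map qualified parameter names to parameter-group names."""
    # "self" overrides the group inherited from the parent module.
    own_group = node.param_groups.get("self", inherited)
    out = {}
    for pname in node.params:
        # A parameter-level override wins over the module-level group.
        out[prefix + pname] = node.param_groups.get(pname, own_group)
    for cname, child in node.children.items():
        # A member-level override applies to the whole child subtree.
        child_group = node.param_groups.get(cname, own_group)
        out.update(gather(child, child_group, prefix + cname + "."))
    return out

# Example: the decoder's parameters go to "color"; its "b" parameter is
# further overridden into "bias"; everything else stays in "default".
root = Node(
    params={"w": 1},
    children={"decoder": Node(params={"w": 2, "b": 3},
                              param_groups={"b": "bias"})},
    param_groups={"decoder": "color"},
)
```

Here the member-level override (`"decoder": "color"`) applies to the whole child subtree, while the parameter-level override (`"b": "bias"`) wins for that single parameter.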
Reviewed By: shapovalov
Differential Revision: D40145802
fbshipit-source-id: 631c02b8d79ee1c0eb4c31e6e42dbd3d2882078a
Summary:
Added initialization configuration for the last layer of the MLP decoding function. You can now set:
- the last activation function (TensoRF uses sigmoid)
- the last bias init (TensoRF uses 0, because of the sigmoid)
- an option to use Xavier initialization (we use ReLU, so this should not be set)
Reviewed By: davnov134
Differential Revision: D40304981
fbshipit-source-id: ec398eb2235164ae85cb7c09b9660e843490ea04
Summary:
Small config system fix. Allows get_default_args to work on an instance which has been created with a dict (instead of a DictConfig) as an args field. E.g.
```
gm = GenericModel(
    raysampler_AdaptiveRaySampler_args={"scene_extent": 4.0}
)
OmegaConf.structured(gm)
```
Reviewed By: shapovalov
Differential Revision: D40341047
fbshipit-source-id: 587d0e8262e271df442a80858949a48e5d6db3df
Summary: TensoRF does relu or softmax after the density grid. This diff adds the ability to replicate that.
Reviewed By: bottler
Differential Revision: D40023228
fbshipit-source-id: 9f19868cd68460af98ab6e61c7f708158c26dc08
Summary: More helpful errors when the output channels aren't 1 for density and 3 for color
Reviewed By: shapovalov
Differential Revision: D40341088
fbshipit-source-id: 6074bf7fefe11c8e60fee4db2760b776419bcfee
Summary: Couldn't build p3d on devfair because C++17 is unsupported. Two structured bindings sneaked in.
Reviewed By: bottler
Differential Revision: D40280967
fbshipit-source-id: 9627f3f9f76247a6cefbeac067fdead67c6f4e14
Summary:
TensoRF at step 2000 does volume cropping and resizing.
At those steps it calculates the part of the voxel grid whose density is large enough to contain objects and resizes the grid to fit that object.
The change is done on 3 levels:
- the implicit function subscribes to epochs; at specific epochs it finds the bounding box of the object and calls resizing of the color and density voxel grids to fit it
- the VoxelGrid module calls cropping of the underlying voxel grid and resizing to fit the previous size; it also adjusts its extents and translation to match the wanted size
- each voxel grid has its own way of cropping the underlying data
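The bounding-box-and-crop step can be sketched with nested lists instead of tensors; both helpers below are hypothetical illustrations, not the real VoxelGrid code.

```python
# Hedged sketch: find the bounding box of sufficiently dense cells, then
# crop the dense grid to that box (hypothetical helpers).

def occupied_bbox(grid, threshold=0.0):
    """grid: nested [z][y][x] densities; returns ((z0,z1),(y0,y1),(x0,x1))."""
    zs, ys, xs = [], [], []
    for z, plane in enumerate(grid):
        for y, row in enumerate(plane):
            for x, v in enumerate(row):
                if v > threshold:
                    zs.append(z); ys.append(y); xs.append(x)
    if not zs:
        return None  # nothing dense enough
    return ((min(zs), max(zs) + 1),
            (min(ys), max(ys) + 1),
            (min(xs), max(xs) + 1))

def crop(grid, bbox):
    """Slice the grid down to the bounding box."""
    (z0, z1), (y0, y1), (x0, x1) = bbox
    return [[row[x0:x1] for row in plane[y0:y1]] for plane in grid[z0:z1]]
```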
Reviewed By: kjchalup
Differential Revision: D39854548
fbshipit-source-id: 5435b6e599aef1eaab980f5421d3369ee4829c50
Summary:
Avoid creating a numpy array of random things just to split it: this can now generate a warning, e.g. if the list contains lists of varying lengths. There might also be a performance win here, and we could do more of the same if we care about that.
(The vanilla way to avoid the new warning is to replace `np.split(a,` with `np.split(np.array(a, dtype=object), ` btw.)
Reviewed By: shapovalov
Differential Revision: D40209308
fbshipit-source-id: daae33a23ceb444e8e7241f72ce1525593e2f239
Summary: The forward method is sped up using the scaffold, a low-resolution voxel grid used to filter out points in empty space. These points are predicted as having 0 density and (0, 0, 0) color. The points which were not evaluated as empty space are passed through the steps outlined above.
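The filtering can be sketched as below; `forward_with_scaffold` is a hypothetical stand-in for the real forward method, operating on plain lists instead of tensors.

```python
# Hedged sketch (not the actual Implicitron code): points in cells the
# scaffold marks empty get zero density and (0, 0, 0) color; only the
# remaining points are passed to the full evaluation.
def forward_with_scaffold(points, scaffold_occupied, evaluate):
    densities = [0.0] * len(points)
    colors = [(0.0, 0.0, 0.0)] * len(points)
    # Indices of points the scaffold considers non-empty.
    keep = [i for i, p in enumerate(points) if scaffold_occupied(p)]
    results = evaluate([points[i] for i in keep])
    # Scatter the evaluated results back to the original point ordering.
    for i, (d, c) in zip(keep, results):
        densities[i], colors[i] = d, c
    return densities, colors
```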
Reviewed By: kjchalup
Differential Revision: D39579671
fbshipit-source-id: 8eab8bb43ef77c2a73557efdb725e99a6c60d415
Summary: Avoids use of `torch.cat` operation when rendering a volume by instead issuing multiple calls to `torch.nn.functional.grid_sample`. Density and color tensors can be large.
Reviewed By: bottler
Differential Revision: D40072399
fbshipit-source-id: eb4cd34f6171d54972bbf2877065f973db497de0
Summary:
Torch C++ extension for Marching Cubes
- Add a torch C++ extension for marching cubes. Observe a speedup of ~255x-324x (over varying batch sizes and spatial resolutions)
- Add C++ impl in existing unit-tests.
(Note: this ignores all push blocking failures!)
Reviewed By: kjchalup
Differential Revision: D39590638
fbshipit-source-id: e44d2852a24c2c398e5ea9db20f0dfaa1817e457
Summary: Overhaul of marching_cubes_naive for better performance and to avoid relying on unstable hashing. In particular, instead of hashing vertex positions, we index each interpolated vertex by its corresponding edge in the 3D grid.
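The edge-indexing idea can be sketched as follows; the encoding below is a hypothetical illustration, not the actual implementation.

```python
# Hedged sketch: identify an interpolated vertex by the grid edge it lies
# on (integer lower corner plus the axis the edge runs along), rather than
# hashing its float position. Each lattice point owns 3 outgoing edges.
def edge_id(x, y, z, axis, W, H):
    # axis 0 -> edge toward +x, 1 -> +y, 2 -> +z
    return 3 * ((z * H + y) * W + x) + axis
```

Because the id is built from exact integer coordinates, two triangles sharing an edge always get the same vertex id, with no dependence on floating-point hashing.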
Reviewed By: kjchalup
Differential Revision: D39419642
fbshipit-source-id: b5fede3525c545d1d374198928dfb216262f0ec0
Summary:
Threaded the for loop:
```
for (int yi = 0; yi < H; ++yi) {...}
```
in function `RasterizeMeshesNaiveCpu()`.
Chunk sizes are approximately equal.
Reviewed By: bottler
Differential Revision: D40063604
fbshipit-source-id: 09150269405538119b0f1b029892179501421e68
Summary: Loads the whole dataset, moves it to the device, and sends it for sampling to enable full-dataset heterogeneous raysampling.
Reviewed By: bottler
Differential Revision: D39263009
fbshipit-source-id: c527537dfc5f50116849656c9e171e868f6845b1
Summary:
Changed ray_sampler and metrics to be able to use mixed-frame raysampling.
Ray_sampler now has a new member which it passes to the pytorch3d raysampler.
If the ray bundle is heterogeneous, metrics now samples images by padding xys first. This reduces memory consumption.
Reviewed By: bottler, kjchalup
Differential Revision: D39542221
fbshipit-source-id: a6fec23838d3049ae5c2fd2e1f641c46c7c927e3
Summary: New ImplicitronRayBundle with added camera IDs and camera counts. Added to enable a single ray bundle inside Implicitron and easier extension in the future. Since RayBundle is a named tuple and RayBundleHeterogeneous is a dataclass, RayBundleHeterogeneous cannot inherit from RayBundle. So without ImplicitronRayBundle, every function that uses RayBundle would have to use Union[RayBundle, RayBundleHeterogeneous], which is confusing and unnecessarily complicated.
Reviewed By: bottler, kjchalup
Differential Revision: D39262999
fbshipit-source-id: ece160e32f6c88c3977e408e966789bf8307af59
Summary:
Added heterogeneous raysampling to the pytorch3d raysampler: different cameras are sampled a different number of times.
It now returns a RayBundle if heterogeneous raysampling is off and the new RayBundleHeterogeneous (with added fields `camera_ids` and `camera_counts`) otherwise. Heterogeneous raysampling is on if `n_rays_total` is not None.
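A rough sketch of how `n_rays_total` rays could be distributed across cameras; the helper below is hypothetical, not the real raysampler.

```python
import random

# Hedged sketch: draw n_rays_total camera indices with replacement, then
# report which cameras were hit (`camera_ids`) and how many rays each one
# receives (`camera_counts`) -- the two new fields mentioned above.
def sample_camera_ids(num_cameras, n_rays_total, seed=0):
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_rays_total):
        cam = rng.randrange(num_cameras)
        counts[cam] = counts.get(cam, 0) + 1
    camera_ids = sorted(counts)
    camera_counts = [counts[i] for i in camera_ids]
    return camera_ids, camera_counts
```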
Reviewed By: bottler
Differential Revision: D39542222
fbshipit-source-id: d3d88d822ec7696e856007c088dc36a1cfa8c625
Summary:
This is quite a thin wrapper – not sure we need it. The motivation is that `Transform3d` is not as matrix-centric now; it can be converted to the SE(3) logarithm equally easily.
It simplifies things like averaging cameras and getting the axis-angle of a camera rotation (previously, one would need to call `se3_log_map(cameras.get_world_to_camera_transform().get_matrix())`); now there is one fewer thing to call / discover.
Reviewed By: bottler
Differential Revision: D39928000
fbshipit-source-id: 85248d5b8af136618f1d08791af5297ea5179d19
Summary:
`get_rotation_to_best_fit_xy` is useful to expose externally; however, there was a bug (which we probably did not care about for our use case): it could return a rotation matrix with det(R) == -1.
The diff fixes that, and also makes the centroid optional (it can be computed from the points).
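One common way to repair an orthogonal matrix with det == -1 is to negate one row; the simplified sketch below (plain 3x3 lists, hypothetical helpers) negates the last row, which is not necessarily the choice the actual fix makes.

```python
# Hedged sketch: turn an orthogonal matrix with det == -1 into a proper
# rotation by negating its last row (hypothetical, simplified version).
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def to_proper_rotation(m):
    if det3(m) < 0:
        m = [m[0], m[1], [-v for v in m[2]]]
    return m
```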
Reviewed By: bottler
Differential Revision: D39926791
fbshipit-source-id: 5120c7892815b829f3ddcc23e93d4a5ec0ca0013
Summary: Any module can subscribe to step updates from the training loop. Once the training loop publishes a step, the voxel grid changes its dimensions. During construction, VoxelGridModule does not know which resolution will be loaded from the checkpoint, so before checkpoint loading a hook runs which changes the VoxelGridModule's parameters to match the shapes in the loaded checkpoint.
Reviewed By: bottler
Differential Revision: D39026775
fbshipit-source-id: 0d359ea5c8d2eda11d773d79c7513c83585d5f17
Summary:
User reported that cloned cameras fail to save. The error with latest PyTorch is
```
pickle.PicklingError: Can't pickle ~T_destination: attribute lookup T_destination on torch.nn.modules.module failed
```
This fixes it.
Reviewed By: btgraham
Differential Revision: D39692258
fbshipit-source-id: 75bbf3b8dfa0023dc28bf7d4cc253ca96e46a64d
Summary:
We need to make packing/unpacking in 2 places for mixed frame raysampling (metrics and raysampler) but those tensors that need to be unpacked/packed have more than two dimensions.
I could have reshaped and stored dimensions but this seems to just complicate code there with something which packed_to_padded should support.
I could have made a separate function for implicitron but it would confusing to have two different padded_to_packed functions inside pytorch3d codebase one of which does packing for (b, max) and (b, max, f) and the other for (b, max, …)
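The generalization to (b, max, ...) can be sketched with nested lists; the real function operates on tensors, so this is only an illustration of the semantics.

```python
# Hedged sketch of padded_to_packed generalized to (b, max, ...): keep the
# first counts[i] entries of row i; any trailing feature dimensions ride
# along unchanged (hypothetical, list-based illustration).
def padded_to_packed(padded, counts):
    packed = []
    for row, n in zip(padded, counts):
        packed.extend(row[:n])
    return packed
```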
Reviewed By: bottler
Differential Revision: D39729026
fbshipit-source-id: 2bdebf290dcc6c316b7fe1aeee49bbb5255e508c
Summary: The implicit function, its members, and its internal workings.
Reviewed By: kjchalup
Differential Revision: D38829764
fbshipit-source-id: 28394fe7819e311ed52c9defc9a1b29f37fbc495