Mirror of https://github.com/facebookresearch/pytorch3d.git
(breaking) image_size-agnostic GridRaySampler
Summary: As suggested in #802. By not persisting the _xy_grid buffer, we can allow (in some cases) a model with one image_size to be loaded from a saved model which was trained at a different resolution. Also avoid persisting _frequencies in HarmonicEmbedding for similar reasons; a minimal sketch of the mechanism follows the commit metadata below.

BC-break: This will cause load_state_dict, in strict mode, to complain if you try to load an old model with the new code (see the sketch after the diff).

Reviewed By: patricklabatut
Differential Revision: D30349234
fbshipit-source-id: d6061d1e51c9f79a78d61a9f732c9a5dfadbbb47
Committed by: Facebook GitHub Bot
Parent: 1251446383
Commit: 1b8d86a104
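A minimal sketch of the mechanism described in the summary: a buffer registered with persistent=False is excluded from state_dict(), so a resolution-dependent grid no longer ties a checkpoint to one image_size. ToyRaySampler below is an illustrative stand-in, not the actual GridRaySampler.

import torch


class ToyRaySampler(torch.nn.Module):
    # Illustrative stand-in for GridRaySampler; not the PyTorch3D class itself.
    def __init__(self, image_size: int) -> None:
        super().__init__()
        ticks = torch.linspace(-1.0, 1.0, image_size)
        xy_grid = torch.stack(torch.meshgrid(ticks, ticks, indexing="ij"), dim=-1)
        # persistent=False keeps the grid out of state_dict(), so a checkpoint
        # saved at one image_size can be loaded into a model built with another.
        self.register_buffer("_xy_grid", xy_grid, persistent=False)
        self.linear = torch.nn.Linear(2, 3)


small = ToyRaySampler(image_size=32)
large = ToyRaySampler(image_size=128)
assert "_xy_grid" not in small.state_dict()  # the grid is not persisted
large.load_state_dict(small.state_dict())    # strict load succeeds despite the size change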
@@ -14,7 +14,7 @@ class HarmonicEmbedding(torch.nn.Module):
         omega0: float = 1.0,
         logspace: bool = True,
         include_input: bool = True,
-    ):
+    ) -> None:
         """
         Given an input tensor `x` of shape [minibatch, ... , dim],
         the harmonic embedding layer converts each feature
@@ -69,10 +69,10 @@ class HarmonicEmbedding(torch.nn.Module):
             dtype=torch.float32,
         )
 
-        self.register_buffer("_frequencies", omega0 * frequencies)
+        self.register_buffer("_frequencies", omega0 * frequencies, persistent=False)
         self.include_input = include_input
 
-    def forward(self, x: torch.Tensor):
+    def forward(self, x: torch.Tensor) -> torch.Tensor:
         """
         Args:
             x: tensor of shape [..., dim]
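On the BC-break noted in the summary: a checkpoint written by the old code still contains the now non-persistent buffers, so a strict load_state_dict reports them as unexpected keys. Below is a sketch of how that surfaces and one way to handle it; ToyEmbedding is again an illustrative stand-in for HarmonicEmbedding, not the real class.

import torch


class ToyEmbedding(torch.nn.Module):
    # Illustrative stand-in for HarmonicEmbedding; not the PyTorch3D class itself.
    def __init__(self, n_harmonic_functions: int = 6, omega0: float = 1.0) -> None:
        super().__init__()
        frequencies = 2.0 ** torch.arange(n_harmonic_functions, dtype=torch.float32)
        self.register_buffer("_frequencies", omega0 * frequencies, persistent=False)


model = ToyEmbedding()

# Simulate a checkpoint produced by the old code, where "_frequencies" was persisted.
old_state = {"_frequencies": 2.0 ** torch.arange(6, dtype=torch.float32)}

# A strict load now fails because the key is no longer expected:
#   model.load_state_dict(old_state)  # RuntimeError: Unexpected key(s) "_frequencies"

# Load non-strictly (or strip the stale keys first) and inspect what was skipped.
missing, unexpected = model.load_state_dict(old_state, strict=False)
print(unexpected)  # ['_frequencies']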