Not normalising control points by X.std()

Summary:
davnov134 found that the algorithm crashes if X is an axis-aligned plane. This is because I implemented scaling the control points by `X.std()` as a poor man’s version of PCA whitening: for an axis-aligned plane, the standard deviation along the plane normal is zero, so the scaled control points become coplanar (two of them even coincide) and the barycentric-coordinate system is singular.
I checked that the scaling does not bring consistent accuracy improvements, so let’s get rid of it.
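
For intuition, here is a minimal sketch of the degeneracy in plain PyTorch (not the library code; the toy data and the `control_matrix` helper are illustrative, and weights are ignored for simplicity):

import torch

# Toy point cloud lying on the axis-aligned plane z = 0.
x = torch.rand(100, 3)
x[:, 2] = 0.0

x_mean = x.mean(dim=0)
x_std = x.std(dim=0)  # the z component is zero

# EPnP-style control points: the canonical basis vectors plus the origin.
c_world = torch.cat([torch.eye(3), torch.zeros(1, 3)], dim=0)

c_old = c_world * x_std + x_mean  # scaled: all four points collapse onto the plane
c_new = c_world + x_mean          # unscaled: points stay in general position

# Barycentric coordinates require inverting the 4x4 system [c^T; 1^T].
def control_matrix(c):
    return torch.cat([c, torch.ones(4, 1)], dim=1).t()

print(torch.linalg.matrix_rank(control_matrix(c_old)))  # 3 -> singular, crash
print(torch.linalg.matrix_rank(control_matrix(c_new)))  # 4 -> invertible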

The algorithm still results in slightly higher errors on axis-aligned planes, but at least it does not crash. As a next step, I will experiment with detecting the planar case (see the sketch below) and using 3-point barycentric coordinates rather than 4-point ones.
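
One plausible way to detect the planar case is to threshold the smallest singular value of the centred points; a hedged sketch of that idea (the `is_planar` name and `rel_tol` default are hypothetical, not part of this diff):

import torch

def is_planar(x, rel_tol=1e-6):
    # x: (N, 3) point cloud. Treat it as planar when the smallest singular
    # value of the centred points is negligible relative to the largest.
    x_centred = x - x.mean(dim=0, keepdim=True)
    s = torch.linalg.svdvals(x_centred)
    return bool(s[-1] < rel_tol * s[0])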

Reviewed By: davnov134

Differential Revision: D21179968

fbshipit-source-id: 1f002fce5541934486b51808be0e910324977222
Roman Shapovalov 2020-04-23 06:02:36 -07:00 committed by Facebook GitHub Bot
parent 9f31a4fd46
commit 54b482bd66
2 changed files with 2 additions and 2 deletions

pytorch3d/ops/__init__.py

@@ -6,6 +6,7 @@ from .graph_conv import GraphConv
 from .knn import knn_gather, knn_points
 from .mesh_face_areas_normals import mesh_face_areas_normals
 from .packed_to_padded import packed_to_padded, padded_to_packed
+from .perspective_n_points import efficient_pnp
 from .points_alignment import corresponding_points_alignment, iterative_closest_point
 from .points_normals import (
     estimate_pointcloud_local_coord_frames,

pytorch3d/ops/perspective_n_points.py

@@ -34,11 +34,10 @@ def _define_control_points(x, weight, storage_opts=None):
     """
     storage_opts = storage_opts or {}
     x_mean = oputil.wmean(x, weight)
-    x_std = oputil.wmean((x - x_mean) ** 2, weight) ** 0.5
     c_world = F.pad(torch.eye(3, **storage_opts), (0, 0, 0, 1), value=0.0).expand_as(
         x[:, :4, :]
     )
-    return c_world * x_std + x_mean
+    return c_world + x_mean


 def _compute_alphas(x, c_world):