Summary:
A couple of open-source tests started failing after my changes yesterday because of numerical issues in the normals calculation. In particular, we generate meshes with very few vertices and several faces, where some vertex normals should be zero but end up non-negligible after F.normalize. I have no idea why the environments produce different results: the same tests pass internally.
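For intuition, here is a minimal sketch of the failure mode (the residue value is made up): normalizing a vector that should be zero but carries floating-point residue yields a full unit vector pointing in an essentially arbitrary direction.
```python
import torch
import torch.nn.functional as F

# A vertex normal that should be exactly zero, but carries a tiny
# floating-point residue from an imperfect cancellation (made-up value).
residue = torch.tensor([[1e-7, -3e-8, 2e-8]])

# Unless its norm falls below F.normalize's eps, the vector is rescaled
# to unit length, so the noise becomes an O(1) normal whose direction
# is unstable; this is what trips the comparisons in the tests.
print(F.normalize(residue, dim=1))
```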
An example. Consider a mesh with the following faces:
```
tensor([[4, 0, 2],
        [4, 1, 2],
        [3, 1, 0],
        [1, 3, 1],
        [3, 0, 1],
        [4, 0, 0],
        [4, 0, 2]])
```
Vertex 3 touches one zero-area face ([1, 3, 1]) and two faces that are back-to-back with each other ([3, 1, 0] and [3, 0, 1]), so its normal should be exactly zero. The open-source calculation instead produces a small but nonzero normal that varies unpredictably with changes in scale/offset, which can cause test failures.
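To make the cancellation concrete, here is a sketch of one common accumulate-and-normalize scheme (the vertex positions are made up, and this is not necessarily PyTorch3D's exact implementation): vertex 3 receives an exactly-zero contribution from the degenerate face and two contributions that are mathematical negatives of each other, so any nonzero result is pure floating-point residue.
```python
import torch
import torch.nn.functional as F

# Made-up vertex positions; the commit does not include the real ones.
torch.manual_seed(0)
verts = torch.rand(5, 3)
faces = torch.tensor(
    [[4, 0, 2], [4, 1, 2], [3, 1, 0], [1, 3, 1],
     [3, 0, 1], [4, 0, 0], [4, 0, 2]]
)

v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]

# Accumulate area-weighted face normals onto each face's corners.
normals = torch.zeros(5, 3)
normals.index_add_(0, faces[:, 0], torch.cross(v1 - v0, v2 - v0, dim=1))
normals.index_add_(0, faces[:, 1], torch.cross(v2 - v1, v0 - v1, dim=1))
normals.index_add_(0, faces[:, 2], torch.cross(v0 - v2, v1 - v2, dim=1))

# Vertex 3 gets an exactly-zero normal from the degenerate face
# [1, 3, 1], plus two opposite normals from [3, 1, 0] and [3, 0, 1].
# Whether those cancel bit-exactly can depend on how the cross products
# are evaluated (e.g. FMA contraction differing between builds); any
# residue is then inflated to a unit vector by F.normalize.
print(normals[3])
print(F.normalize(normals[3], dim=0))
```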
The main change in this diff is to increase the number of vertices in the test meshes, which makes such degenerate configurations much less likely. There is also a small change to init_mesh so that it no longer generates a batch of empty meshes.
Reviewed By: nikhilaravi
Differential Revision: D28220984
fbshipit-source-id: 79fdc62e5f5f8836de5a3a9980cfd6fe44590359