Summary: Adds the readme file.

Reviewed By: nikhilaravi

Differential Revision: D25684459

fbshipit-source-id: f1aaa621a2a67c98d5fcfe33fe9bbfea8f95b537
commit 51de308b80
parent 2628fb56f2
Author: David Novotny (committed by Facebook GitHub Bot)
Date: 2021-02-02 05:42:59 -08:00

7 changed files with 87 additions and 5 deletions

projects/nerf/README.md (new file, 79 lines)

@@ -0,0 +1,79 @@
Neural Radiance Fields in PyTorch3D
===================================
This project implements the Neural Radiance Fields (NeRF) method from [1].

Installation
------------
1) [Install PyTorch3D](https://github.com/facebookresearch/pytorch3d/blob/master/INSTALL.md)
- Note that this project requires `PyTorch` version `>= 1.6.0` due to its dependency on `torch.searchsorted`.
2) Install other dependencies:
- [`visdom`](https://github.com/facebookresearch/visdom)
- [`hydra`](https://github.com/facebookresearch/hydra)
- [`Pillow`](https://python-pillow.org/)
- [`requests`](https://pypi.org/project/requests/)
E.g. using `pip`:
```
pip install visdom
pip install hydra-core --upgrade
pip install Pillow
pip install requests
```
Exporting videos further requires a working `ffmpeg`.

Training NeRF
-------------
```
python ./train_nerf.py --config-name lego
```
will train the model from [1] on the Lego dataset.

Note that the script outputs visualizations to Visdom. To enable them, start the Visdom server before launching the training:
```
python -m visdom.server
```
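To verify that the server is reachable before a long training run, one can run a quick check from Python (a hypothetical snippet, not part of the project's scripts):
```
# Hypothetical sanity check: confirm a visdom server is listening on the
# default address (http://localhost:8097) before starting training.
from visdom import Visdom

viz = Visdom()
assert viz.check_connection(), "visdom server is not running"
```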
Note that training on the "lego" scene takes roughly 24 hours on a single Tesla V100.
#### Training data
Note that the `train_nerf.py` script will automatically download the relevant dataset if it is missing.
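As a rough sketch of what this auto-download amounts to (the helper name and file layout below are illustrative assumptions; the actual logic lives in the project's dataset code):
```
# Illustrative sketch only: fetch a dataset file from the public bucket
# (listed later in this README) if it is not already cached locally.
# The exact file names used by the project are an assumption here.
import os

import requests

URL_ROOT = "https://dl.fbaipublicfiles.com/pytorch3d_nerf_data"

def maybe_download(filename: str, data_root: str = "data") -> str:
    local_path = os.path.join(data_root, filename)
    if not os.path.isfile(local_path):
        os.makedirs(data_root, exist_ok=True)
        response = requests.get(f"{URL_ROOT}/{filename}", stream=True)
        response.raise_for_status()
        with open(local_path, "wb") as f:
            for chunk in response.iter_content(chunk_size=1 << 20):
                f.write(chunk)
    return local_path
```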
Testing NeRF
------------
```
python ./test_nerf.py --config-name lego
```
will load a trained model from the `./checkpoints` directory and evaluate it on the test split of the corresponding dataset (Lego in the example above).
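The evaluation reports image-reconstruction metrics (the renderer imports `calc_mse` and `calc_psnr` from `nerf/utils.py`, per the diffs below); for reference, a minimal version of the PSNR metric is:
```
# Minimal sketch of the standard MSE / PSNR metrics; assumes image
# intensities in [0, 1]. The project ships its own versions in nerf/utils.py.
import torch

def calc_mse(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Mean squared error between predicted and ground-truth images.
    return torch.mean((x - y) ** 2)

def calc_psnr(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Peak signal-to-noise ratio in decibels.
    return -10.0 * torch.log10(calc_mse(x, y))
```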
### Exporting a multi-view video of the radiance field
Furthermore, the codebase supports generating videos of the neural radiance field.
The following generates a turntable video of the Lego scene:
```
python ./test_nerf.py --config-name=lego test.mode='export_video'
```
Note that this requires a working `ffmpeg` for generating the video from exported frames.
Additionally, note that rendering the video at the original resolution is quite slow. To speed up the process, you can lower the resolution of the output video with the `data.image_size` flag:
```
python ./test_nerf.py --config-name=lego test.mode='export_video' data.image_size="[128,128]"
```
This will generate the video in a lower `128 x 128` resolution.
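These overrides use standard Hydra syntax and compose freely with other config fields. For reference, the same configuration can also be built programmatically, as in this sketch (the `configs/` directory matches the `CONFIG_DIR` used by the scripts; the exact import path of `compose`/`initialize` varies across `hydra-core` versions):
```
# Sketch: compose the "lego" config with CLI-style overrides via Hydra's
# Python API. In hydra-core 1.0 these helpers live in hydra.experimental.
from hydra import compose, initialize

with initialize(config_path="configs"):
    cfg = compose(
        config_name="lego",
        overrides=["test.mode=export_video", "data.image_size=[128,128]"],
    )
    print(cfg.data.image_size)
```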
Training & testing on other datasets
------------------------------------
Currently we support the following datasets:
- `lego`: `python ./train_nerf.py --config-name lego`
- `fern`: `python ./train_nerf.py --config-name fern`
- `pt3logo`: `python ./train_nerf.py --config-name pt3logo`

The dataset files are located in the following public S3 bucket:
https://dl.fbaipublicfiles.com/pytorch3d_nerf_data
Attribution: the `lego` and `fern` data come from the original code release of [1] (https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1) and are hosted under the CC-BY 4.0 license (https://creativecommons.org/licenses/by/4.0/). The S3 bucket contains the same images, while the camera matrices have been adjusted to follow the PyTorch3D convention.
#### References
[1] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. ECCV 2020.

@@ -1,6 +1,6 @@
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
import os
-from typing import Tuple, Optional, List
+from typing import List, Optional, Tuple
import numpy as np
import requests
@@ -9,6 +9,7 @@ from PIL import Image
from pytorch3d.renderer import PerspectiveCameras
from torch.utils.data import Dataset
DEFAULT_DATA_ROOT = os.path.join(
os.path.dirname(os.path.realpath(__file__)), "..", "data"
)

@@ -3,7 +3,7 @@ import math
from typing import Tuple
import torch
-from pytorch3d.renderer import look_at_view_transform, PerspectiveCameras
+from pytorch3d.renderer import PerspectiveCameras, look_at_view_transform
def generate_eval_video_cameras(

@@ -1,5 +1,5 @@
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
-from typing import Tuple, List, Optional
+from typing import List, Optional, Tuple
import torch
from pytorch3d.renderer import ImplicitRenderer, ray_bundle_to_ray_points
@@ -11,7 +11,7 @@ from visdom import Visdom
from .implicit_function import NeuralRadianceField
from .raymarcher import EmissionAbsorptionNeRFRaymarcher
from .raysampler import NeRFRaysampler, ProbabilisticRaysampler
-from .utils import sample_images_at_mc_locs, calc_psnr, calc_mse
+from .utils import calc_mse, calc_psnr, sample_images_at_mc_locs
class RadianceFieldRenderer(torch.nn.Module):

@@ -3,7 +3,7 @@ import math
from typing import List
import torch
-from pytorch3d.renderer import RayBundle, NDCGridRaysampler, MonteCarloRaysampler
+from pytorch3d.renderer import MonteCarloRaysampler, NDCGridRaysampler, RayBundle
from pytorch3d.renderer.cameras import CamerasBase
from .utils import sample_pdf

@@ -13,6 +13,7 @@ from nerf.stats import Stats
from omegaconf import DictConfig
from PIL import Image
CONFIG_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), "configs")

@@ -14,6 +14,7 @@ from nerf.stats import Stats
from omegaconf import DictConfig
from visdom import Visdom
CONFIG_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), "configs")