
Commit bcb6700

bmild authored and yenchenlin committed
v0.1 release to public
0 parents  commit bcb6700

35 files changed: +2698 -0 lines changed

Diff for: .gitignore

+9 lines

@@ -0,0 +1,9 @@
**/.ipynb_checkpoints
**/__pycache__
*.png
*.mp4
*.npy
*.npz
*.dae
data/*
logs/*

Diff for: .gitmodules

Whitespace-only changes.

Diff for: LICENSE

+21 lines

@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2020 bmild

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Diff for: README.md

+164 lines

@@ -0,0 +1,164 @@
# NeRF-pytorch

[NeRF](http://www.matthewtancik.com/nerf) is a method that achieves state-of-the-art results for synthesizing novel views of complex scenes. Here are some videos generated by this repository (pre-trained models are provided below):

![](https://user-images.githubusercontent.com/7057863/78472232-cf374a00-7769-11ea-8871-0bc710951839.gif)
![](https://user-images.githubusercontent.com/7057863/78472235-d1010d80-7769-11ea-9be9-51365180e063.gif)

This project is a faithful PyTorch implementation of [NeRF](http://www.matthewtancik.com/nerf) that **reproduces** the results while running **1.3 times faster**. The code is tested to match the authors' TensorFlow implementation [here](https://github.com/bmild/nerf) numerically.

## Installation

```
git clone https://github.com/yenchenlin/nerf-pytorch.git
cd nerf-pytorch
pip install -r requirements.txt
cd torchsearchsorted
pip install .
cd ../
```
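After installation, an optional sanity check (not part of the repository) can confirm that PyTorch and the compiled torchsearchsorted extension are importable. The `searchsorted` import below assumes the package exposes it at the top level, as in the upstream torchsearchsorted project:

```python
# Optional post-install sanity check (illustrative, not part of this repo).
import torch

print(torch.__version__)           # the dependency list targets PyTorch 1.4
print(torch.cuda.is_available())   # the training times quoted below assume a CUDA GPU

# If the CUDA extension compiled correctly, this import should succeed
# (assuming the package exposes `searchsorted` at the top level).
from torchsearchsorted import searchsorted
```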
<details>
<summary> Dependencies (click to expand) </summary>

## Dependencies
- PyTorch 1.4
- matplotlib
- numpy
- imageio
- imageio-ffmpeg
- configargparse

The LLFF data loader requires ImageMagick.

You will also need the [LLFF code](http://github.com/fyusion/llff) (and COLMAP) set up to compute poses if you want to run on your own real data.

</details>

## How To Run?

### Quick Start

Download data for two example datasets: `lego` and `fern`
```
bash download_example_data.sh
```

To train a low-res `lego` NeRF:
```
python run_nerf_torch.py --config configs/config_lego.txt
```
After training for 100k iterations (~4 hours on a single 2080 Ti), you can find the following video at `logs/lego_test/lego_test_spiral_100000_rgb.mp4`.

![](https://user-images.githubusercontent.com/7057863/78473103-9353b300-7770-11ea-98ed-6ba2d877b62c.gif)
---

To train a low-res `fern` NeRF:
```
python run_nerf_torch.py --config configs/config_fern.txt
```
After training for 200k iterations (~8 hours on a single 2080 Ti), you can find the following videos at `logs/fern_test/fern_test_spiral_200000_rgb.mp4` and `logs/fern_test/fern_test_spiral_200000_disp.mp4`.

![](https://user-images.githubusercontent.com/7057863/78473081-58ea1600-7770-11ea-92ce-2bbf6a3f9add.gif)

---

### More Datasets
To play with other scenes presented in the paper, download the data [here](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1). Place the downloaded dataset according to the following directory structure:
```
├── configs
│   ├── ...
│
├── data
│   ├── nerf_llff_data
│   │   ├── fern
│   │   ├── flower   # downloaded llff dataset
│   │   ├── horns    # downloaded llff dataset
│   │   └── ...
│   ├── nerf_synthetic
│   │   ├── lego
│   │   ├── ship     # downloaded synthetic dataset
│   │   └── ...
```

---

To train NeRF on different datasets:

```
python run_nerf_torch.py --config configs/config_{DATASET}.txt
```

Replace `{DATASET}` with `trex` | `horns` | `flower` | `fortress` | `lego` | etc.

---

To test NeRF trained on different datasets:

```
python run_nerf_torch.py --config configs/config_{DATASET}.txt --render_only
```

Replace `{DATASET}` with `trex` | `horns` | `flower` | `fortress` | `lego` | etc.

### Pre-trained Models

You can download the pre-trained models [here](https://drive.google.com/drive/folders/1jIr8dkvefrQmv737fFm2isiT6tqpbTbv?usp=sharing). Place the downloaded directory in `./logs` in order to test it later. See the following directory structure for an example:

```
├── logs
│   ├── fern_test
│   ├── flower_test   # downloaded logs
│   ├── trex_test     # downloaded logs
```

### Reproducibility

Tests that ensure the results of all functions and the training loop match the official implementation live on a separate branch, `reproduce`. You can check it out and run the tests:
```
git checkout reproduce
py.test
```
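These tests compare tensors produced by the two implementations numerically. Purely as an illustration of the pattern (this is not one of the actual tests on the `reproduce` branch, and the file names are hypothetical placeholders):

```python
import numpy as np

def test_rendered_rgb_matches_reference():
    # Hypothetical fixture paths: a saved output from the TensorFlow code and
    # the corresponding output from this PyTorch port.
    reference = np.load('reference_rgb.npy')
    candidate = np.load('pytorch_rgb.npy')
    assert np.allclose(reference, candidate, atol=1e-5)
```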
## Method

[NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis](http://tancik.com/nerf)
[Ben Mildenhall](https://people.eecs.berkeley.edu/~bmild/)\*<sup>1</sup>,
[Pratul P. Srinivasan](https://people.eecs.berkeley.edu/~pratul/)\*<sup>1</sup>,
[Matthew Tancik](http://tancik.com/)\*<sup>1</sup>,
[Jonathan T. Barron](http://jonbarron.info/)<sup>2</sup>,
[Ravi Ramamoorthi](http://cseweb.ucsd.edu/~ravir/)<sup>3</sup>,
[Ren Ng](https://www2.eecs.berkeley.edu/Faculty/Homepages/yirenng.html)<sup>1</sup> <br>
<sup>1</sup>UC Berkeley, <sup>2</sup>Google Research, <sup>3</sup>UC San Diego
\*denotes equal contribution

<img src='imgs/pipeline.jpg'/>

> A neural radiance field is a simple fully connected network (weights are ~5MB) trained to reproduce input views of a single scene using a rendering loss. The network directly maps from spatial location and viewing direction (5D input) to color and opacity (4D output), acting as the "volume" so we can use volume rendering to differentiably render new views

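To make the quoted description concrete, here is a minimal, self-contained sketch of that kind of network. It illustrates the idea only and is not the architecture defined in this repository's `run_nerf_torch.py` (positional encoding, skip connections, and the coarse/fine split are omitted):

```python
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    """Toy MLP: 3D position + viewing direction in, RGB color and density out."""
    def __init__(self, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density = nn.Linear(hidden, 1)               # opacity (sigma)
        self.color = nn.Sequential(
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),      # RGB in [0, 1]
        )

    def forward(self, xyz, viewdir):
        h = self.backbone(xyz)                             # features of the 3D point
        sigma = torch.relu(self.density(h))                # density must be non-negative
        rgb = self.color(torch.cat([h, viewdir], dim=-1))  # color also depends on view direction
        return rgb, sigma

# Query a batch of 1024 sample points (random values, shapes only).
model = TinyRadianceField()
rgb, sigma = model(torch.rand(1024, 3), torch.rand(1024, 3))
```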
## Citation
Kudos to the authors for their amazing results:
```
@misc{mildenhall2020nerf,
    title={NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis},
    author={Ben Mildenhall and Pratul P. Srinivasan and Matthew Tancik and Jonathan T. Barron and Ravi Ramamoorthi and Ren Ng},
    year={2020},
    eprint={2003.08934},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```

However, if you find this implementation or the pre-trained models helpful, please consider citing:
```
@misc{lin2020nerfpytorch,
    title={NeRF-pytorch},
    author={Yen-Chen, Lin},
    howpublished={\url{https://github.com/yenchenlin/nerf-pytorch/}},
    year={2020}
}
```
Diff for: configs/config_fern.txt

+15 lines

@@ -0,0 +1,15 @@
expname = fern_test
basedir = ./logs
datadir = ./data/nerf_llff_data/fern
dataset_type = llff

factor = 8
llffhold = 8

N_rand = 1024
N_samples = 64
N_importance = 64

use_viewdirs = True
raw_noise_std = 1e0
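These `key = value` files are read with configargparse. As a rough sketch of how options like these might be declared and loaded (the real argument definitions live in `run_nerf_torch.py`, which is not part of this commit, so the defaults and help strings below are assumptions that simply mirror the config keys):

```python
import configargparse

# Hypothetical parser mirroring the keys in configs/config_fern.txt;
# the actual definitions are in run_nerf_torch.py.
parser = configargparse.ArgumentParser()
parser.add_argument('--config', is_config_file=True, help='path to a config file')
parser.add_argument('--expname', type=str, help='experiment name (log subdirectory)')
parser.add_argument('--basedir', type=str, default='./logs', help='where logs and checkpoints go')
parser.add_argument('--datadir', type=str, help='input data directory')
parser.add_argument('--dataset_type', type=str, default='llff', help='llff or blender')
parser.add_argument('--factor', type=int, default=8, help='downsample factor for LLFF images')
parser.add_argument('--llffhold', type=int, default=8, help='hold out every Nth image for testing')
parser.add_argument('--N_rand', type=int, default=1024, help='random rays per gradient step')
parser.add_argument('--N_samples', type=int, default=64, help='coarse samples per ray')
parser.add_argument('--N_importance', type=int, default=64, help='additional fine samples per ray')
parser.add_argument('--use_viewdirs', action='store_true', help='condition color on viewing direction')
parser.add_argument('--raw_noise_std', type=float, default=0., help='noise added to predicted densities')

args = parser.parse_args(['--config', 'configs/config_fern.txt'])
print(args.expname, args.datadir, args.N_rand)
```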

Diff for: configs/config_flower.txt

+15 lines

@@ -0,0 +1,15 @@
expname = flower_test
basedir = ./logs
datadir = ./data/nerf_llff_data/flower
dataset_type = llff

factor = 8
llffhold = 8

N_rand = 1024
N_samples = 64
N_importance = 64

use_viewdirs = True
raw_noise_std = 1e0

Diff for: configs/config_fortress.txt

+15 lines

@@ -0,0 +1,15 @@
expname = fortress_test
basedir = ./logs
datadir = ./data/nerf_llff_data/fortress
dataset_type = llff

factor = 8
llffhold = 8

N_rand = 1024
N_samples = 64
N_importance = 64

use_viewdirs = True
raw_noise_std = 1e0

Diff for: configs/config_horns.txt

+15 lines

@@ -0,0 +1,15 @@
expname = horns_test
basedir = ./logs
datadir = ./data/nerf_llff_data/horns
dataset_type = llff

factor = 8
llffhold = 8

N_rand = 1024
N_samples = 64
N_importance = 64

use_viewdirs = True
raw_noise_std = 1e0

Diff for: configs/config_lego.txt

+15 lines

@@ -0,0 +1,15 @@
expname = lego_test
basedir = ./logs
datadir = ./data/nerf_synthetic/lego
dataset_type = blender

half_res = True

N_samples = 64
N_importance = 64

use_viewdirs = True

white_bkgd = True

N_rand = 1024
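For context on `white_bkgd = True`: the synthetic Blender renders are RGBA, and `load_blender.py` (below) keeps all four channels, so the training code composites them onto a white background. A minimal sketch of that step, assuming the `[N, H, W, 4]` image array returned by the loader (the actual compositing happens in `run_nerf_torch.py`, which is not part of this commit):

```python
import numpy as np

def composite_on_white(imgs):
    """imgs: float array of shape [N, H, W, 4] with values in [0, 1]."""
    rgb, alpha = imgs[..., :3], imgs[..., 3:]
    return rgb * alpha + (1.0 - alpha)   # alpha-blend every pixel against white
```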

Diff for: configs/config_trex.txt

+15 lines

@@ -0,0 +1,15 @@
expname = trex_test
basedir = ./logs
datadir = ./data/nerf_llff_data/trex
dataset_type = llff

factor = 8
llffhold = 8

N_rand = 1024
N_samples = 64
N_importance = 64

use_viewdirs = True
raw_noise_std = 1e0

Diff for: download_example_data.sh

+6 lines

@@ -0,0 +1,6 @@
wget https://people.eecs.berkeley.edu/~bmild/nerf/tiny_nerf_data.npz
mkdir -p data
cd data
wget https://people.eecs.berkeley.edu/~bmild/nerf/nerf_example_data.zip
unzip nerf_example_data.zip
cd ..

Diff for: imgs/pipeline.jpg

342 KB

Diff for: load_blender.py

+91 lines

@@ -0,0 +1,91 @@
import os
import torch
import numpy as np
import imageio
import json
import torch.nn.functional as F
import cv2


trans_t = lambda t : torch.Tensor([
    [1,0,0,0],
    [0,1,0,0],
    [0,0,1,t],
    [0,0,0,1]]).float()

rot_phi = lambda phi : torch.Tensor([
    [1,0,0,0],
    [0,np.cos(phi),-np.sin(phi),0],
    [0,np.sin(phi), np.cos(phi),0],
    [0,0,0,1]]).float()

rot_theta = lambda th : torch.Tensor([
    [np.cos(th),0,-np.sin(th),0],
    [0,1,0,0],
    [np.sin(th),0, np.cos(th),0],
    [0,0,0,1]]).float()


def pose_spherical(theta, phi, radius):
    c2w = trans_t(radius)
    c2w = rot_phi(phi/180.*np.pi) @ c2w
    c2w = rot_theta(theta/180.*np.pi) @ c2w
    c2w = torch.Tensor(np.array([[-1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]])) @ c2w
    return c2w


def load_blender_data(basedir, half_res=False, testskip=1):
    splits = ['train', 'val', 'test']
    metas = {}
    for s in splits:
        with open(os.path.join(basedir, 'transforms_{}.json'.format(s)), 'r') as fp:
            metas[s] = json.load(fp)

    all_imgs = []
    all_poses = []
    counts = [0]
    for s in splits:
        meta = metas[s]
        imgs = []
        poses = []
        if s=='train' or testskip==0:
            skip = 1
        else:
            skip = testskip

        for frame in meta['frames'][::skip]:
            fname = os.path.join(basedir, frame['file_path'] + '.png')
            imgs.append(imageio.imread(fname))
            poses.append(np.array(frame['transform_matrix']))
        imgs = (np.array(imgs) / 255.).astype(np.float32) # keep all 4 channels (RGBA)
        poses = np.array(poses).astype(np.float32)
        counts.append(counts[-1] + imgs.shape[0])
        all_imgs.append(imgs)
        all_poses.append(poses)

    i_split = [np.arange(counts[i], counts[i+1]) for i in range(3)]

    imgs = np.concatenate(all_imgs, 0)
    poses = np.concatenate(all_poses, 0)

    H, W = imgs[0].shape[:2]
    camera_angle_x = float(meta['camera_angle_x'])
    focal = .5 * W / np.tan(.5 * camera_angle_x)

    render_poses = torch.stack([pose_spherical(angle, -30.0, 4.0) for angle in np.linspace(-180,180,40+1)[:-1]], 0)

    if half_res:
        H = H//2
        W = W//2
        focal = focal/2.

        imgs_half_res = np.zeros((imgs.shape[0], H, W, 4))
        for i, img in enumerate(imgs):
            imgs_half_res[i] = cv2.resize(img, (H, W), interpolation=cv2.INTER_AREA)
        imgs = imgs_half_res
        # imgs = tf.image.resize_area(imgs, [400, 400]).numpy()


    return imgs, poses, render_poses, [H, W, focal], i_split
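For context, a minimal sketch of how this loader might be invoked. The real call site is in `run_nerf_torch.py`, which is not part of this commit; the dataset path below is simply the one used in `configs/config_lego.txt`, and `testskip=8` is an illustrative choice:

```python
from load_blender import load_blender_data

# Hypothetical usage; requires the lego dataset under ./data/nerf_synthetic/lego.
imgs, poses, render_poses, hwf, i_split = load_blender_data(
    './data/nerf_synthetic/lego', half_res=True, testskip=8)

H, W, focal = hwf
i_train, i_val, i_test = i_split
print(imgs.shape)          # (N, H, W, 4): RGBA images scaled to [0, 1]
print(poses.shape)         # (N, 4, 4): camera-to-world matrices
print(render_poses.shape)  # (40, 4, 4): orbit of poses for rendering a test video
```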

0 commit comments
