<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="description"
content="A scene-level structure from motion dataset of applied to novel view synthesis.">
<meta name="keywords" content="MegaScenes, SfM, Structure from Motion, Dataset, Scene, NVS, Novel View Synthesis">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>MegaScenes</title>
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-1EWGMC7JTK"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-1EWGMC7JTK');
</script>
<link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
rel="stylesheet">
<link rel="stylesheet" href="./static/css/bulma.min.css">
<link rel="stylesheet" href="./static/css/bulma-carousel.min.css">
<link rel="stylesheet" href="./static/css/bulma-slider.min.css">
<link rel="stylesheet" href="./static/css/fontawesome.all.min.css">
<link rel="stylesheet"
href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="./static/css/index.css">
<link rel="icon" href="data:image/svg+xml,
<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22>
<text y=%22.9em%22 font-size=%2290%22>🏛️</text>
</svg>"
>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script defer src="./static/js/fontawesome.all.min.js"></script>
<script src="./static/js/bulma-carousel.min.js"></script>
<script src="./static/js/bulma-slider.min.js"></script>
<script src="./static/js/index.js"></script>
</head>
<body>
<section class="hero">
<div class="hero-body">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column has-text-centered">
<h1 class="publication-title is-size-1 publication-title">MegaScenes: Scene-Level View Synthesis at Scale</h1>
<div class="is-size-5 publication-venue">ECCV 2024</div>
<div class="is-size-5 publication-authors">
<span class="author-block">
<a href="https://jot-jt.github.io/">Joseph Tung</a><sup>꘎1</sup>,</span>
<span class="author-block">
<a href="https://genechou.com/">Gene Chou</a><sup>꘎1</sup>,</span>
<span class="author-block">
<a href="https://www.cs.cornell.edu/~ruojin/">Ruojin Cai</a><sup>1</sup>,
</span>
<span class="author-block">
<a href="https://www.guandaoyang.com/">Guandao Yang</a><sup>2</sup>,
</span>
<span class="author-block">
<a href="https://kai-46.github.io/website/">Kai Zhang</a><sup>3</sup>,
</span>
<span class="author-block">
<a href="https://stanford.edu/~gordonwz/">Gordon Wetzstein</a><sup>2</sup>,
</span>
<span class="author-block">
<a href="https://www.cs.cornell.edu/~bharathh/">Bharath Hariharan</a><sup>1</sup>,
</span>
<span class="author-block">
<a href="https://www.cs.cornell.edu/~snavely/">Noah Snavely</a><sup>1</sup>
</span>
</div>
<div class="is-size-5 publication-authors">
<span class="author-block"><sup>1</sup>Cornell University,</span>
<span class="author-block"><sup>2</sup>Stanford University,</span>
<span class="author-block"><sup>3</sup>Adobe Research</span>
</div>
<div class="is-size-6">
<span class="author-block"><sup>꘎</sup>Equal Contribution</span>
</div>
<div class="column has-text-centered">
<div class="publication-links">
<!-- PDF Link. -->
<span class="link-block">
<a href="MegaScenes_paper_v1.pdf"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fas fa-file-pdf"></i>
</span>
<span>Paper</span>
</a>
</span>
<span class="link-block">
<a href="https://arxiv.org/abs/2406.11819"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="ai ai-arxiv"></i>
</span>
<span>arXiv</span>
</a>
</span>
<!-- Code Link. -->
<span class="link-block">
<a href="https://github.com/MegaScenes/nvs"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fab fa-github"></i>
</span>
<span>Code</span>
</a>
</span>
<!-- Dataset Link. -->
<span class="link-block">
<a href="https://github.com/MegaScenes/dataset"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="far fa-images"></i>
</span>
<span>Data</span>
</a>
</span>
<span class="link-block">
<a href="https://megascenes.github.io/web-viewer/"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="far fa-images"></i>
</span>
<span>Web Viewer</span>
</a>
</span>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<section class="hero teaser">
<div class="container is-max-desktop">
<div class="hero-body">
<img src="./static/images/MegaScenes_teaser.jpg"
class="interpolation-image"
alt="Interpolate start reference image."/>
<h2 class="subtitle has-text-centered">
The <span class="dnerf">MegaScenes</span> Dataset is an extensive collection
of structure-from-motion reconstructions and internet images. It includes a diversity of scenes
like minarets, building interiors,
statues, bridges,
towers,
religious buildings,
and natural landscapes.
The images of these scenes are captured under varying conditions,
including different times of day, various weather and illumination,
and from different devices with distinct camera intrinsics.
</h2>
</div>
<div class="hero-body">
<video id="nvs_teaser" autoplay controls muted loop playsinline height="100%">
<source src="./static/videos/nvs_teaser.mp4"
type="video/mp4">
</video>
<h2 class="subtitle has-text-centered">
On the task of single-image novel view synthesis (NVS), we show that training on <span class="dnerf">MegaScenes</span> leads to generalization to in-the-wild scenes.
All videos shown here are generated using a single image as input, and none of the categories were seen during training.
</h2>
</div>
</div>
</section>
<section class="section hero is-light">
<div class="container is-max-desktop">
<!-- Abstract. -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
<p>
Scene-level novel view synthesis (NVS) is fundamental to many vision
and graphics applications. Recently, pose-conditioned diffusion
models have led to significant progress by extracting 3D information
from 2D foundation models, but these methods are limited by the lack
of scene-level training data. Common dataset choices either consist
of isolated objects (Objaverse), or of object-centric scenes with
limited pose distributions (DTU, CO3D).
In this paper, we create a
large-scale scene-level dataset from Internet photo collections,
called <span class="dnerf">MegaScenes</span>, which contains over 100K SfM reconstructions
from around the world. Internet photos represent a scalable data source
but come with challenges such as lighting and transient objects. We
address these issues to further create a subset suitable for the
task of NVS. Additionally, we analyze failure cases of
state-of-the-art NVS methods and significantly improve generation
consistency. Through extensive experiments we validate
the effectiveness of both our dataset and method on generating
in-the-wild scenes.
</p>
</div>
</div>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-four-fifths">
<div class="columns is-centered has-text-centered">
<h2 class="title is-3">Dataset Collection</h2>
</div>
<p>
We first source and identify potential scene categories from WikiData.
Subsequently, we download the images and metadata for each scene category.
Finally, we reconstruct scenes using Structure from Motion (SfM) and clean them using the Doppelgangers pipeline.
</p>
<img src="./static/images/dataset/dataset_pipeline.jpg"/>
</div>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-four-fifths">
<div class="columns is-centered has-text-centered">
<h2 class="title is-3">Dataset Statistics</h2>
</div>
<p>
We show the distribution of scenes in the MegaScenes Dataset.
On the left, we depict the frequency of scenes grouped by WikiData class.
We include only select classes with more than 3,500 scenes; note that a single scene may be an instance of multiple classes.
On the right, we visualize the geospatial distribution of collected scenes worldwide.
</p>
<img src="./static/images/dataset/scene_class_combined_log.png"/>
</div>
</div>
</div>
</section>
<!-- <section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-four-fifths">
<div class="columns is-centered has-text-centered">
<h2 class="title is-3">Dataset Layout</h2>
</div>
<p>
layout: depends on how joseph you decide to organize the directories and subdirectories? e.g. grouped by first two letters
</p>
</div>
</div>
</div>
</section> -->
<section class="section" style="margin-bottom: 10px; padding-bottom: 0px;">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-four-fifths">
<div class="columns is-centered has-text-centered">
<h2 class="title is-3">Application: Single Image Novel View Synthesis</h2>
</div>
<p>
To explore the diversity and scale of the MegaScenes Dataset, we experiment with the task of single-image novel view synthesis, where the goal is to take a reference image and generate a plausible image at a target pose. We train and evaluate on image pairs with pseudo-ground-truth relative poses obtained via SfM.
</p>
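<p>
For instance, given COLMAP-style 4x4 world-to-camera extrinsics from SfM, the relative pose between a reference and a target view can be computed as follows (a minimal numpy sketch, assuming that matrix convention):
</p>
<pre><code>import numpy as np

def relative_pose(ref_w2c, tgt_w2c):
    """Map reference-camera coordinates to target-camera coordinates,
    given 4x4 world-to-camera extrinsics for the two views."""
    return tgt_w2c @ np.linalg.inv(ref_w2c)

# Example: identity reference; target shifted one unit along x.
ref = np.eye(4)
tgt = np.eye(4)
tgt[0, 3] = 1.0
print(relative_pose(ref, tgt))  # translation column is (1, 0, 0)
</code></pre>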
</div>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column">
<div class="content">
<h2 class="title is-4">Conditioning on the Extrinsic Matrix</h2>
<p>
Simply finetuning pose-conditioned diffusion models, such as ZeroNVS, significantly improves their generalization to in-the-wild scenes. However, the depth and scale of the scene in ZeroNVS are ambiguous and require manual tuning.
</p>
<img src="./static/images/nvs/zeronvs_example.png"/>
<figcaption>These scenes are unseen during training. ZeroNVS finetuned on MegaScenes, denoted ZeroNVS (MS), demonstrates stronger generalizability. However, when there are larger translation changes, such as zooming, ZeroNVS (MS) still fails. See the paper for more examples.</figcaption>
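<p style="margin-top: 20px;">
One way to pin down the ambiguous scale (a sketch under our own assumptions, not necessarily ZeroNVS's exact normalization) is to rescale the camera translation by a depth statistic of the scene, such as a quantile of the SfM point depths:
</p>
<pre><code>import numpy as np

def normalize_scale(w2c, cam_points, quantile=0.2):
    """Illustrative scale normalization; not the exact scheme used
    by ZeroNVS. Rescales the translation of a 4x4 world-to-camera
    extrinsic so scenes share a comparable scale.
    cam_points: Nx3 SfM points in camera coordinates."""
    scale = np.quantile(cam_points[:, 2], quantile)
    out = w2c.copy()
    out[:3, 3] /= scale
    return out
</code></pre>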
</div>
</div>
<div class="column">
<h2 class="title is-4">Conditioning on Warped Images</h2>
<div class="columns is-centered">
<div class="column content">
<p>
We find that first warping the reference image into the target pose provides a strong conditioning signal: it encodes how pixels should move and is directly aligned with the scene scale. On our training and evaluation datasets, the scale is based on 3D SfM points. Given a random, in-the-wild image, we can determine the scene scale from estimated monocular depth and use the same extrinsics for both conditioning and warping, yielding a consistent scale.
</p>
<img src="./static/images/nvs/newwarpfigure.png"/>
</div>
</div>
</div>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-four-fifths">
<div class="columns is-centered has-text-centered">
<h2 class="title is-4">Evaluation</h2>
</div>
<p>
We evaluate on MegaScenes’ test set, which consists of in-the-wild scenes from Internet photos. Here, we show comparisons between four models:
</p>
<ol>
<li>SD-inpainting: a Stable Diffusion inpainting model without any finetuning.</li>
<li>ZeroNVS (released): the released ZeroNVS checkpoint.</li>
<li>ZeroNVS (MS): ZeroNVS finetuned on MegaScenes.</li>
<li>Ours: finetuned from ZeroNVS on MegaScenes, and conditioned on both the extrinsic matrices and the warped images.</li>
</ol>
<p>
See the paper for more evaluations and baselines.
</p>
<img src="./static/images/nvs/qual_results.png" style="margin-top: 20px;"/>
<img src="./static/images/nvs/quant_results.png" style="margin-top: 20px;" />
</div>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-four-fifths">
<div class="columns is-centered has-text-centered">
<h2 class="title is-3">Discussion</h2>
</div>
<p>
MegaScenes is a general large-scale 3D dataset, and we foresee a variety of 3D-related applications that could benefit from it, such as pose estimation, feature matching, and reconstruction. In this paper, we focus on NVS as a representative application, and we find that MegaScenes indeed supports training generalizable 3D models.
</p>
</div>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-four-fifths">
<div class="columns is-centered has-text-centered">
<h2 class="title is-3">Acknowledgments</h2>
</div>
<p>
We thank Brandon Li for building the COLMAP webviewer.
This work was funded in part by the National Science Foundation (IIS-2008313, IIS-2211259, IIS-2212084). Gene Chou was funded by an NSF Graduate Research Fellowship.
</p>
</div>
</div>
</div>
</section>
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title">BibTeX</h2>
<pre><code>@inproceedings{tung2024megascenes,
  title={MegaScenes: Scene-Level View Synthesis at Scale},
  author={Tung, Joseph and Chou, Gene and Cai, Ruojin and Yang, Guandao and Zhang, Kai and Wetzstein, Gordon and Hariharan, Bharath and Snavely, Noah},
  booktitle={ECCV},
  year={2024}
}
</code></pre>
</div>
</section>
<footer class="footer">
<div class="container">
<div class="content has-text-centered">
<a class="icon-link"
href="./MegaScenes_paper_v1.pdf">
<i class="fas fa-file-pdf"></i>
</a>
<a class="icon-link" href="https://github.com/MegaScenes/dataset" class="external-link" disabled>
<i class="fab fa-github"></i>
</a>
</div>
<div class="columns is-centered">
<div class="column is-8">
<div class="content">
<p>
This website template is borrowed from <a href="https://nerfies.github.io/">Nerfies</a>.
</p>
</div>
</div>
</div>
</div>
</footer>
</body>
</html>