Commit: fixed some bugs
subash-khanal committed Oct 24, 2024
1 parent 48cfb9a commit d9577e7
Showing 1 changed file with 7 additions and 14 deletions.
21 changes: 7 additions & 14 deletions PSM/index.html
@@ -87,7 +87,7 @@ <h1 class="title is-1 publication-title">PSM: Learning Probabilistic Embeddings
</div>

<div class="is-size-5 publication-authors">
-<span class="author-block">Washington University<br>ACM Multimedia, 2024</span>
+<span class="author-block">Washington University in St. Louis<br>ACM Multimedia, 2024</span>
</div>

<div class="column has-text-centered">
@@ -145,7 +145,7 @@ <h1 class="title is-1 publication-title">PSM: Learning Probabilistic Embeddings
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
<p>
-A soundscape is defined by the acoustic environment a person perceives at a location. In this work, we propose a framework for mapping soundscapes across the Earth. Since soundscapes involve sound distributions that span varying spatial scales, we represent locations with multi-scale satellite imagery and learn a joint rep- resentation among this imagery, audio, and text. To capture the inherent uncertainty in the soundscape of a location, we design the representation space to be probabilistic. We also fuse ubiqui- tous metadata (including geolocation, time, and data source) to enable learning of spatially and temporally dynamic representa- tions of soundscapes. We demonstrate the utility of our framework by creating large-scale soundscape maps integrating both audio and text with temporal control. To facilitate future research on this task, we also introduce a large-scale dataset, GeoSound, contain- ing over 300𝑘 geotagged audio samples paired with both low- and high-resolution satellite imagery. We demonstrate that our method outperforms the existing state-of-the-art on both GeoSound and the existing SoundingEarth dataset.
+A soundscape is defined by the acoustic environment a person perceives at a location. In this work, we propose a framework for mapping soundscapes across the Earth. Since soundscapes involve sound distributions that span varying spatial scales, we represent locations with multi-scale satellite imagery and learn a joint representation among this imagery, audio, and text. To capture the inherent uncertainty in the soundscape of a location, we design the representation space to be probabilistic. We also fuse ubiquitous metadata (including geolocation, time, and data source) to enable learning of spatially and temporally dynamic representations of soundscapes. We demonstrate the utility of our framework by creating large-scale soundscape maps integrating both audio and text with temporal control. To facilitate future research on this task, we also introduce a large-scale dataset, GeoSound, containing over 300k geotagged audio samples paired with both low- and high-resolution satellite imagery. We demonstrate that our method outperforms the existing state-of-the-art on both GeoSound and the existing SoundingEarth dataset.
</p>
</div>
</div>
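The abstract above describes learning a probabilistic (uncertainty-aware) embedding space shared by imagery, audio, and text. The paper's exact formulation is not shown in this diff, so the following is only an illustrative sketch, assuming each modality is embedded as a diagonal Gaussian (mean plus log-variance) and compared with a 2-Wasserstein-style distance; the function and variable names are hypothetical:

```python
import numpy as np

def gaussian_embedding_similarity(mu_a, logvar_a, mu_b, logvar_b):
    """Similarity between two diagonal-Gaussian embeddings (mean, log-variance).

    Uses the negative squared 2-Wasserstein distance between diagonal
    Gaussians; higher values mean more similar. This is one common choice
    for probabilistic embeddings, not necessarily the one used by PSM.
    """
    var_a, var_b = np.exp(logvar_a), np.exp(logvar_b)
    # W2^2 = ||mu_a - mu_b||^2 + ||sqrt(var_a) - sqrt(var_b)||^2
    d2 = np.sum((mu_a - mu_b) ** 2) + np.sum((np.sqrt(var_a) - np.sqrt(var_b)) ** 2)
    return -d2

# Toy check: a satellite-image embedding should score higher against a
# nearby audio embedding than against a distant, high-uncertainty one.
mu_img, lv_img = np.zeros(4), np.zeros(4)
mu_near, lv_near = np.full(4, 0.1), np.zeros(4)
mu_far, lv_far = np.full(4, 2.0), np.ones(4)

print(gaussian_embedding_similarity(mu_img, lv_img, mu_near, lv_near) >
      gaussian_embedding_similarity(mu_img, lv_img, mu_far, lv_far))  # True
```

The log-variance term is what lets the model express the "inherent uncertainty" the abstract mentions: two embeddings with identical means but different variances are still distinguishable under this distance.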
@@ -224,19 +224,12 @@ <h2 class="title">Satellite Image to Sound Retrieval</h2>
<div class="container is-max-desktop content">
<h2 class="title">BibTeX</h2>
<pre><code>@inproceedings{khanal2024psm,
-  annotation = {remote_sensing,spotlight},
-  title = {PSM: Learning Probabilistic Embeddings for Multi-scale Zero-Shot Soundscape Mapping},
   author = {Khanal, Subash and Xing, Eric and Sastry, Srikumar and Dhakal, Aayush and Xiong, Zhexiao and Ahmad, Adeel and Jacobs, Nathan},
-  thumbnail = {/thumbnails/psm.jpg},
-  booktitle = {ACM Multimedia},
+  title = {{PSM}: Learning Probabilistic Embeddings for Multi-scale Zero-shot Soundscape Mapping},
-  author+an = {7=highlight},
-  pdf = {https://arxiv.org/pdf/2408.07050},
-  eprint = {2408.07050},
-  archiveprefix = {arXiv},
-  primaryclass = {cs.CV},
-  month = oct,
-  day = {28},
-  year = {2024}}</code></pre>
+  year = {2024},
+  month = nov,
+  booktitle = {Association for Computing Machinery Multimedia (ACM Multimedia)},
+}</code></pre>
</div>
</section>
<!--End BibTex citation -->
