What would it take to make WebAssembly audio inputs happen? #813

@austintheriot

Description

Hello! First off, thanks for your work on this library! cpal has been incredibly useful to me as a web/Rust dev.

Currently, there are `// TODO`s in place of all the input-related code for the WebAudio host.

I'm wondering: what would it take to make input available on the web? Fundamentally, there appears to be an API mismatch, in that cpal requires synchronous configuration, whereas the web requires async setup via `getUserMedia()`. Would it be possible to expose a synchronous API here, where the input is essentially a plain buffer that a media stream, set up after the fact, writes into?

I'm interested because I'm working on a cross-platform audio node library, and I'd like to support the web as a high-priority compilation target:

Code: https://github.com/austintheriot/resonix
Demo: https://austintheriot.github.io/resonix/

So far, I've been working primarily with pre-recorded audio, but I'd love to support direct microphone access.

Would it make more sense to create this type of API user-side rather than library-side? I suppose I could potentially write microphone data to a plain buffer and surface that as if it were an audio node, but I'm not sure whether the Web Audio API exposes raw audio data like that.
