
Using with MonoGame


Installation

Installing the NuGet package

To install OpenVoiceSharp via NuGet, check here.

Once that's done, you will notice that you can now use OpenVoiceSharp. However, you will also notice upon running that your game/app crashes, because the native libraries are not in the output folder.

Linking DLL files

First off, install the OpenVoiceSharp package in your MonoGame project.

To link the OpenVoiceSharp DLL files, create a dependency (linker) project and copy the required DLLs into the project's folder.

Note

For simplicity's sake, and to match the Unity example as well, I used the Steamworks SDK for the networking via the Facepunch.Steamworks package. The following behavior and networking system can of course be customized according to your needs. To link the DLLs, click here. Use the Steamworks 1.48a version of the SDK and the test Steam app id 480 (Spacewar).
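
If you go the same route, getting Steamworks up is a one-liner with Facepunch.Steamworks. A minimal sketch (putting it in the Initialize override is just one possible place; 480 is Valve's public Spacewar test app id):

using Steamworks;

protected override void Initialize()
{
    // initialize the Steamworks client with the Spacewar test app id (480);
    // with async callbacks enabled (the default), no manual RunCallbacks() loop is needed
    SteamClient.Init(480);

    base.Initialize();
}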

Usage

Now that you've linked the DLL files, let's start programming!

To begin, create a MonoGame project if you haven't already. Once that's done, we can move on to recording and sending.

Recording & Sending

Here you have two choices: you can either use the native MonoGame Microphone's GetData() functions and BufferAvailable event, or use the BasicMicrophoneRecorder class.

No format conversion is needed (the data is already 16-bit PCM), but resampling is, since MonoGame records at the native microphone sample rate and forces you to record a buffer of at least 100 ms, which can introduce latency and other constraints. This is why I recommend you let the BasicMicrophoneRecorder class do this job.

The following samples are taken from Game1.cs.
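
They assume a recorder and a voice chat interface already exist as fields on Game1. A minimal sketch of that setup (parameterless construction is an assumption here; check the OpenVoiceSharp README for the exact constructor parameters, such as noise suppression or stereo options):

using OpenVoiceSharp;

// microphone recorder and voice chat interface used by the samples below;
// constructor arguments are omitted as an assumption, adjust to your setup
private readonly BasicMicrophoneRecorder MicrophoneRecorder = new();
private readonly VoiceChatInterface VoiceChatInterface = new();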

// microphone rec
MicrophoneRecorder.DataAvailable += (pcmData, length) => {
    // if not connected or not talking, ignore
    if (!Connected) return;
    if (!VoiceChatInterface.IsSpeaking(pcmData)) return;

    // encode the audio data and apply noise suppression.
    (byte[] encodedData, int encodedLength) = VoiceChatInterface.SubmitAudioData(pcmData, length);

    // send packet to everyone (P2P)
    foreach (SteamId steamId in Profiles.Keys)
        SteamNetworking.SendP2PPacket(steamId, encodedData, encodedLength, 0, P2PSend.Reliable);
};
MicrophoneRecorder.StartRecording();

Playing back

Tip

As with any other engine or app, I highly advise doing this on a thread other than the main thread for performance. But because this is meant to be a barebones boilerplate for you to build on, I have not treated performance as crucial here. That said, the following example should be fast and efficient enough for most CPUs to run on the main thread.
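
If you do decide to move playback off the main thread, one possible pattern (a sketch, not something the sample project does) is to queue raw packets as they arrive and decode them on a worker task; just verify that submitting buffers off-thread behaves correctly in your MonoGame version:

using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// raw packets pushed from the network callback
private readonly ConcurrentQueue<(SteamId Sender, byte[] Data)> PacketQueue = new();

// drain and decode on a long-running worker task
Task.Run(() =>
{
    while (true)
    {
        while (PacketQueue.TryDequeue(out var packet))
            HandleMessageFrom(packet.Sender, packet.Data); // defined later in this page

        Thread.Sleep(1); // avoid pegging a core while idle
    }
});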

The workflow should look a little bit like this:

Profile class containing the Steam user
|_ Friend for the Steam user instance
|_ DynamicSoundEffectInstance per user
|_ Texture2D for the avatar


The Profile class contains those basic properties, but they can be customized with your own class or workflow. The Profile.cs sample also contains a basic asynchronous way of loading the Steam avatar into a Texture2D instance.
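
If you want a starting point for that avatar load, here is a rough sketch using Facepunch.Steamworks' GetLargeAvatarAsync; how you expose the GraphicsDevice to Profile (graphicsDevice below) is up to you:

public async Task LoadAvatar()
{
    // fetch the avatar from Steam; null if unavailable
    var image = await SteamFriends.GetLargeAvatarAsync(SteamMember.Id);
    if (image is null) return;

    // Steam returns raw RGBA bytes, which a default-format Texture2D accepts directly
    Avatar = new Texture2D(graphicsDevice, (int)image.Value.Width, (int)image.Value.Height);
    Avatar.SetData(image.Value.Data);
}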

Let's cover this step by step:

Creating the audio playbacks

To play back the audio we receive, we have 3 major questions to answer:

  1. How do I stream audio in MonoGame?
  2. How do I supply PCM samples (voice data)?
  3. How do I play back the samples?

For the first question, XNA blessed us with the DynamicSoundEffectInstance class, which allows playback of 16-bit PCM samples you supply, meaning no conversion is needed!

First, we need to create the sound effect instance and set it to the correct sample rate and channel type.

public DynamicSoundEffectInstance SoundEffectInstance = new(VoiceChatInterface.SampleRate, AudioChannels.Stereo);

Make sure to play the sound whenever the instance is created so that it automatically starts buffering the incoming audio.

public Profile(Friend member) {
    SteamMember = member;

    SoundEffectInstance.Volume = 1.0f;
    SoundEffectInstance.Play();
}

The Profile holding this information and the sound effect instance is created automatically when a player joins the lobby, and when you join the lobby yourself (to sync existing players and allocate their sound effect instances).

Profile management:

// profile
private Profile GetProfile(SteamId steamId) => Profiles[steamId];
private async Task CreateProfile(Friend friend)
{
    // avoid creating clones
    if (Profiles.ContainsKey(friend.Id)) return;

    Profile profile = new(friend);
    await profile.LoadAvatar();

    Profiles.Add(friend.Id, profile);
}
public void DeleteProfile(SteamId steamId)
{
    Profiles.Remove(steamId);
}

Steam events management:

// steam events
SteamMatchmaking.OnLobbyMemberJoined += async (lobby, friend) =>
{
    await CreateProfile(friend);
};
SteamMatchmaking.OnLobbyMemberLeave += (lobby, friend) =>
{
    DeleteProfile(friend.Id);
};

SteamMatchmaking.OnLobbyEntered += async (joinedLobby) =>
{
    Status = "Connected";

    // set to current lobby
    Lobby = joinedLobby;

    // setup
    await SetupLobby();

    Action = "stop hosting";

    Connected = true;
};

Tip

SetupLobby() handles creating the profiles of the people who were in the lobby before us; a possible sketch follows below.

See Game1.cs for more details.
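
If you are writing your own version, SetupLobby() essentially amounts to iterating the lobby's current members; a possible sketch (the real Game1.cs may do more):

private async Task SetupLobby()
{
    // create a profile (and sound effect instance) for everyone already in the lobby
    foreach (Friend member in Lobby.Members)
        await CreateProfile(member);
}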

Decode the data and playback

Great, we have prepared our streams to feed them voice data. To answer the second and third questions: once a sound effect instance exists for the speaker, we can decode the incoming packet and use the SubmitBuffer function to submit the 16-bit PCM buffer.

void HandleMessageFrom(SteamId steamid, byte[] data)
{
    if (steamid == SteamClient.SteamId || !Profiles.ContainsKey(steamid)) return;

    // decode data
    (byte[] decodedData, int decodedLength) = VoiceChatInterface.WhenDataReceived(data, data.Length);

    // push only the decoded bytes to the sound effect instance buffer
    GetProfile(steamid).SoundEffectInstance.SubmitBuffer(decodedData, 0, decodedLength);
}
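
HandleMessageFrom still needs to be fed packets from somewhere. With the P2P setup used here, a possible polling loop in Update() looks like this (a sketch; the sample project may wire this up differently):

protected override void Update(GameTime gameTime)
{
    // drain any pending P2P packets on channel 0
    while (SteamNetworking.IsP2PPacketAvailable())
    {
        var packet = SteamNetworking.ReadP2PPacket();
        if (packet.HasValue)
            HandleMessageFrom(packet.Value.SteamId, packet.Value.Data);
    }

    base.Update(gameTime);
}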

And that's it!

Toggling Noise Suppression

Sadly, MonoGame does not have a native way of handling real-time audio effects as of now.

But toggling noise suppression is extremely straightforward:

if (keyboardState.IsKeyDown(Keys.F) && !BusyTogglingNoiseSuppression)
{
    Task.Run(async () =>
    {
        BusyTogglingNoiseSuppression = true;
        
        // invert
        VoiceChatInterface.EnableNoiseSuppression = !VoiceChatInterface.EnableNoiseSuppression;

        // cooldown
        await Task.Delay(700);
        BusyTogglingNoiseSuppression = false;
    });
}

Here I have a small cooldown system in place, but all you really have to do is toggle EnableNoiseSuppression on the VoiceChatInterface.

Demo & conclusion

Demo video: Desktop.2024.04.21.-.15.08.07.03.mp4

Demo using a soundboard, but voice can be used as well.

There you go! You should now have OpenVoiceSharp fully integrated in no time. Remember to check the barebones and example projects in case you're lost here.
