Multithreaded Audio Engine
An audio engine abstraction layer built on top of the Windows XAudio2 audio API.
XAudio2 is a low-level audio API. This software provides a more usable, multithreaded library for interacting with XAudio2, built around the Actor model.
Walk-through
The Experience
This project was the result of 20+ hours per week of work over a 10-week multithreaded architecture course at DePaul University, and to date (11/20) it is the project I am most proud of. Because it was for a class, we were restricted to only the most basic threading tool: mutexes. With that constraint, we were to focus on creating a clean architecture, mindful of the fact that we were programming an interface for other programmers to use as an extension of a game engine.
The Architecture
Communication between threads
- I implemented the Actor model through the use of circular queues (CircularData), passing commands between my AudioEngineThread, the game engine thread, and the FileThread.
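A minimal sketch of the command-passing idea, assuming a mutex-guarded circular buffer and an illustrative Command type; the engine's real CircularData interface may differ:

```cpp
// Sketch of a mutex-guarded circular command queue. The Command payload and the
// exact CircularData interface are assumptions for illustration only.
#include <array>
#include <cstddef>
#include <mutex>
#include <optional>

struct Command { int id; /* payload a receiving thread acts on */ };

class CircularData
{
public:
    bool Push(const Command& c)                     // called by the producing thread
    {
        std::lock_guard<std::mutex> lock(mtx);
        std::size_t next = (head + 1) % buffer.size();
        if (next == tail) return false;             // queue full, caller retries later
        buffer[head] = c;
        head = next;
        return true;
    }

    std::optional<Command> Pop()                    // called by the consuming thread
    {
        std::lock_guard<std::mutex> lock(mtx);
        if (tail == head) return std::nullopt;      // queue empty
        Command c = buffer[tail];
        tail = (tail + 1) % buffer.size();
        return c;
    }

private:
    std::array<Command, 64> buffer{};
    std::size_t head = 0;
    std::size_t tail = 0;
    std::mutex mtx;
};
```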
Handle System
- A handle system was also needed for this project, using handles to protect the Sound objects owned by the user and the WaveSound objects owned by the audio engine. This protects these objects from being accessed while in an invalid state (i.e. after their resources have been released).
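One common way to implement such a handle is with a generation counter; the sketch below assumes that approach and is not necessarily how the engine's handles are laid out:

```cpp
// Sketch of a generation-checked handle (illustrative only; the engine's real
// handle layout and validation logic are assumptions here).
struct SoundHandle
{
    int index      = -1;   // slot in the engine's internal Sound table
    int generation = -1;   // copied from the slot when the handle is issued
};

struct SoundSlot
{
    int  generation = 0;   // bumped every time the slot is released and recycled
    bool inUse      = false;
    // Sound* sound;        // a real slot would own or point at the Sound resource
};

// A handle only dereferences successfully while its generation matches the slot's,
// so a handle kept past release of the resource simply fails this check.
inline bool IsHandleValid(const SoundHandle& h, const SoundSlot& slot)
{
    return slot.inUse && h.generation == slot.generation;
}
```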
User Interface
- A design choice I am proud of in this project is that the user has restricted access to the real Sound resource and is forced to interact with it through a GameSound object returned by the SoundManager. In addition, all interactions with the audio engine go through the SoundManager or a GameSound.
Documentation
Contents
- 1 The SoundManager
- 2 Loading Wav Resources
- 3 Creating Playlists
- 4 Initializing a Sound Object
- 5 Interacting with a GameSound
- 6 Callbacks
- 7 Sound Commands
- 8 Miscellaneous
1 - The SoundManager
- Singleton SoundManager class – the main interface for the user to interact with the audio engine.
1.1 - Initializing the Audio Engine
The AudioEngine must be initialized before any other interactions with the SoundManager.
1.2 - Uninitializing the Audio Engine
The AudioEngine must be closed before the game application exits.
1.3 - Shutting Down the Audio Engine
The AudioEngine can also be shut down by the user.
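A possible lifecycle in application code; the Initialize()/Shutdown() method names and the static-call style are assumptions for this sketch, so check the SoundManager interface for the exact calls:

```cpp
// Illustrative engine lifecycle. Initialize()/Shutdown() and the static-call
// style are assumptions; the header name is also assumed.
#include "SoundManager.h"   // assumed engine header

int main()
{
    SoundManager::Initialize();   // must happen before any other SoundManager use

    // ... load wave resources, create sounds, run the game loop ...

    SoundManager::Shutdown();     // must happen before the game application closes
    return 0;
}
```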
2 - Loading Wav Resources
- This engine uses the XAudio2 Windows API. Given a path to a .wav file and a SoundID enum to associate with it, a WaveSound is created in the audio engine, referenceable by that SoundID.
Wave resources must be loaded into the audio engine before they are referenced. A custom FileCallback can be given to SoundManager::LoadWaveResource to attach custom code that executes when the wave resource is loaded.
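An illustrative load call; the SoundID values, the ThemeLoadedCallback class, and the exact parameter order are assumptions for this sketch:

```cpp
// Illustrative load calls. The SoundID values, the hypothetical ThemeLoadedCallback
// class, and the exact parameter order are assumptions.
#include "SoundManager.h"   // assumed engine header

void LoadGameAudio()
{
    // Associate a .wav on disk with the enum value the rest of the game will use.
    SoundManager::LoadWaveResource("Audio/Explosion.wav", SoundID::Explosion);

    // Optionally attach a FileCallback (hypothetical ThemeLoadedCallback here)
    // to run custom code once the load completes.
    SoundManager::LoadWaveResource("Audio/Theme.wav", SoundID::Theme,
                                   new ThemeLoadedCallback());
}
```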
3 - Creating Playlists
- After loading all needed wave resources, a playlist can be created to chain together WaveSounds.
A playlist is created by supplying the total count and the SoundID of each sound to SoundManager::CreateGamePlaylist, as well as associating it with a PlaylistID or string.
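A sketch of playlist creation; whether the SoundIDs are passed as an array or individually, and the PlaylistID value used here, are assumptions:

```cpp
// Illustrative playlist creation. The array-passing style and the PlaylistID
// value are assumptions.
#include "SoundManager.h"   // assumed engine header

void BuildIntroPlaylist()
{
    SoundID intro[] = { SoundID::Theme, SoundID::Drums, SoundID::Explosion };

    // Supply the total count and the SoundIDs, associated with a PlaylistID.
    SoundManager::CreateGamePlaylist(PlaylistID::Intro, 3, intro);
}
```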
4 - Initializing a Sound Object
- After loading all the wave resources, a Sound can be created to play and manipulate a particular WaveSound.
A reference to a sound handle is created by supplying the SoundID or PlaylistID to SoundManager::InitializeSound. This function must also be supplied a GameSound as an out parameter, which is used to give access to the Sound.
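An illustrative initialization call, with the exact parameter list assumed:

```cpp
// Illustrative Sound initialization. The exact parameter list and header names
// are assumptions.
#include "SoundManager.h"   // assumed engine header
#include "GameSound.h"      // assumed engine header

void SetUpExplosion(GameSound& outSound)
{
    // The GameSound out parameter becomes the user's only way to reach the Sound handle.
    SoundManager::InitializeSound(SoundID::Explosion, outSound);
}
```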
5 - Interacting with a GameSound
- After initializing your GameSound with an active Sound handle through SoundManager::InitializeSound, the associated WaveSound can be played and manipulated through the GameSound object's member functions.
Numerous actions can be called on a sound (see the sketch after this list), including:
- play
- pause
- stop
- set volume
- ramp the volume over time
- set pan
- change pan over time
- set pitch
- get current playtime
- add scripts
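An illustrative snippet; the member function names below are assumptions based on the actions listed above, not the verified GameSound signatures:

```cpp
// Illustrative GameSound usage. Member function names are assumptions based on
// the action list above; check the GameSound interface for the real signatures.
#include "GameSound.h"   // assumed engine header

void DemoSoundControls(GameSound& sound)
{
    sound.Play();
    sound.SetVolume(0.8f);           // immediate volume change
    sound.RampVolume(0.0f, 2.0f);    // fade to silence over two seconds
    sound.SetPan(-1.0f);             // pan hard left
    sound.Pause();
    sound.Stop();
}
```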
6 - Callbacks
- There are two places where a callback can be supplied: when initializing a WaveSound and when initializing a Sound.
6.1 - FileCallbacks
A FileCallback can be supplied when initializing a WaveSound. This enables the user to execute custom code when the following WaveSound load events occur:
- wave already loaded
- wave load error
- wave loaded
6.2 - SoundCallbacks
A SoundCallback can be supplied when initializing a Sound. This enables the user to execute custom code when the following Sound events occur:
- sound stopped
- sound ended
- sound released
- sound played
- sound paused
- sound resumed
- sound killed
6.3 - Creating Callbacks
A custom SoundCallback or FileCallback can be created by defining a class which publicly inherits from either and overriding the relevant methods.
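For example, a FileCallback subclass might look like the sketch below (the overridable method names are assumptions; a SoundCallback is derived the same way):

```cpp
// Illustrative FileCallback subclass. The overridable method names are
// assumptions; override whichever load events the real base class exposes.
#include "FileCallback.h"   // assumed engine header

class LogWaveLoad : public FileCallback
{
public:
    void OnWaveLoaded() override        { /* e.g. mark the asset as ready */ }
    void OnWaveLoadError() override     { /* e.g. log the failure */ }
    void OnWaveAlreadyLoaded() override { /* e.g. skip duplicate work */ }
};
```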
7 - Sound Commands
- A sound command is used to trigger a sound to play or to trigger an action on a particular sound. There are two types of sound commands: the GameCommand and the SoundScript.
7.1 - Creating GameCommands
The first derived sound command is the GameCommand, which allows user-supplied code to execute at a given delta time. This is done by calling SoundManager::CreateTimeEvent, supplying a delta time for execution as well as a derived GameCommand, initialized through RAII.
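A sketch of scheduling a GameCommand; the Execute() override, the argument order, and how ownership of the command is transferred are assumptions:

```cpp
// Illustrative GameCommand. The Execute() override, the CreateTimeEvent argument
// order, and the assumption that the engine takes ownership of the command are
// all illustrative.
#include "SoundManager.h"   // assumed engine header
#include "GameCommand.h"    // assumed engine header

class StartAmbienceCommand : public GameCommand
{
public:
    void Execute() override
    {
        // user-supplied code, run by the audio engine at the scheduled time
    }
};

void ScheduleAmbience()
{
    // Ask the engine to run the command 5 seconds from now.
    SoundManager::CreateTimeEvent(5.0f, new StartAmbienceCommand());
}
```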
7.2 - Creating SoundScripts
The second derived sound command is the SoundScript, which allows user-supplied code to execute at a given delta time on a particular GameSound. This is done by calling GameSound::AddScript, supplying a delta time for execution as well as a derived SoundScript, initialized through RAII.
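A sketch of attaching a SoundScript; again, the Execute() override, the argument order, and the ownership handling are assumptions:

```cpp
// Illustrative SoundScript. As above, the Execute() override, AddScript argument
// order, and ownership handling are assumptions.
#include "GameSound.h"    // assumed engine header
#include "SoundScript.h"  // assumed engine header

class FadeOutScript : public SoundScript
{
public:
    void Execute() override
    {
        // user-supplied code, run against the owning GameSound at the scheduled time
    }
};

void ScheduleFadeOut(GameSound& sound)
{
    // Run the script 10 seconds into this particular sound's playback.
    sound.AddScript(10.0f, new FadeOutScript());
}
```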
8 - Miscellaneous
8.1 - The Sound Priority Table
A sound priority table has been implemented which restricts the number of sounds allowed to play at once to a configurable limit. A Sound is given a priority number on initialization, which determines its rank in the table. If a new Sound is played while the priority table is full, the lowest-priority Sound is "killed".
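Conceptually, the eviction rule could look like the sketch below, assuming a sorted container keyed by priority; this is an illustration rather than the engine's actual table, and whether a newcomer with the lowest priority is refused outright is also an assumption:

```cpp
// Conceptual sketch of priority-based eviction (illustrative types only; this
// is not the engine's actual table).
#include <cstddef>
#include <map>

struct ActiveSound { /* handle to the playing Sound */ };

class SoundPriorityTable
{
public:
    explicit SoundPriorityTable(std::size_t maxVoices) : maxVoices(maxVoices) {}

    // Returns false if the new sound was refused because it had the lowest priority
    // (whether the engine refuses or still admits it is an assumption).
    bool TryPlay(int priority, const ActiveSound& sound)
    {
        if (table.size() < maxVoices)
        {
            table.emplace(priority, sound);
            return true;
        }

        auto lowest = table.begin();            // ordered map: begin() = lowest priority
        if (priority <= lowest->first)
            return false;

        table.erase(lowest);                    // "kill" the lowest-priority sound
        table.emplace(priority, sound);
        return true;
    }

private:
    std::size_t maxVoices;
    std::multimap<int, ActiveSound> table;      // priority -> playing sound
};
```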