In the simplest case, the sounds XML file is just a list of
<sound> elements. Here's an example:
<?xml version="1.0"?>
<sounds>
  <sound name="player_sudden_pain"
    url=""
    volume="1.0"
    min_gain="0.0"
    max_gain="1.0"
    stream="false"
    priority="0.5" />
  <!-- And more <sound> elements... -->
  <sound name="test_sound_1" />
  <sound name="test_sound_2" />
  <sound name="test_sound_3" />
</sounds>
<sound> can have the following attributes:
name (string; the only required attribute).
A unique sound name.
url (string).
The URL (which can be just a simple filename) from which to load the sound data.
If you don't specify it, we will guess the URL by taking the sound name
and appending various extensions.
Right now we will try looking for .wav and .ogg files.
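For instance, a minimal sketch (the sound names and filenames here are made up for illustration) of explicit versus guessed URLs:

```xml
<?xml version="1.0"?>
<sounds>
  <!-- Explicit url: loaded from exactly this file. -->
  <sound name="door_open" url="doors/open.wav" />
  <!-- No url: the engine guesses, trying e.g. menu_click.wav, menu_click.ogg. -->
  <sound name="menu_click" />
</sounds>
```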
volume (deprecated name: gain) (float, in range 0..infinity).
How loud the sound is.
Use this to indicate that e.g. a plane engine is louder than a mouse squeak (when heard from the same distance).
Note: Do not make the actual sound data (in wav, ogg and such files)
louder/quieter for this purpose.
This is usually bad for sound quality. Instead, keep your sound data
at maximum loudness (normalized), and use this attribute
to scale the sound.
It can be anything from 0 to +infinity. The default is 1. Note that values > 1 are allowed, but some sound backends (like OpenAL) may clip the resulting sound volume (after all spatial calculations are done) to 1.0.
min_gain (float, in range 0..1).
Force a minimum sound loudness, despite what volume would be calculated by the spatialization. This can be used to force a sound to be audible, even when it's far away from the listener.
It must be in the [0, 1] range. By default it is 0.
max_gain (float, in range 0..1).
Force a maximum sound loudness, despite what volume would be calculated by the spatialization. This can be used to limit sound volume, regardless of the distance attenuation calculation.
It must be in the [0, 1] range. By default it is 1.
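As a sketch (the sound names and attribute values are illustrative, not from the engine), min_gain and max_gain clamp the volume that spatialization computes:

```xml
<!-- An alarm that stays audible even far from the listener,
     and an engine hum that never exceeds half volume. -->
<sound name="alarm" min_gain="0.3" />
<sound name="engine_hum" volume="2.0" max_gain="0.5" />
```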
stream (boolean).
Play the sound using streaming. This means that the sound is gradually decompressed in memory, which makes the loading time much shorter, although there may be a small CPU overhead during playback. This is usually a good idea for longer sounds, e.g. music tracks. See the news post about streaming for more details.
priority (float, in range 0..1).
How important the sound is. This influences what happens when a lot of sounds play at once and we need to stop some of them (we cannot have too many sounds playing at once, as then the cost of mixing would be significant, and a human listener cannot distinguish too many simultaneous sounds anyway). A larger priority increases the chance that the sound will keep playing.
By default it is 0.5.
A deprecated attribute to specify the priority also exists.
Migrate to using priority instead.
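For example (the sound names and priority values are illustrative), critical sounds can be given a high priority so that background ambience is dropped first when too many sounds play at once:

```xml
<!-- Keep the player's pain sound playing; drop the wind first if needed. -->
<sound name="player_hurt" priority="0.9" />
<sound name="ambient_wind" priority="0.1" />
```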
You can arrange sounds in groups using the <group> element.
A sound group is like a directory of sound files.
Sounds within a group named
"fight" must be played using a qualified name, like "fight/drum_beat".
This way sound names must only be unique within their group.
A sound group can also (optionally) correspond to an actual subdirectory
of sound files, if it has a subdirectory attribute.
In this case, all the sounds inside are searched within that subdirectory.
A <sound> with the name drum_beat
is by default opened from the file
drum_beat.wav. But if it's in a group with subdirectory="fight",
then it's actually opened from the filename fight/drum_beat.wav.
You can make groups within groups (just like directories in directories).
We also support aliases.
An alias allows defining a sound name that refers to another sound.
<alias> may be placed within a <group> too.
Both alias names, and target names, are automatically qualified by the group name.
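A sketch of an alias placed inside a group (the alias name "punch" is made up for illustration): because both the alias name and the target name are qualified by the group name, this defines "fight/punch" referring to "fight/drum_beat".

```xml
<group name="fight" subdirectory="fight">
  <sound name="drum_beat" />
  <!-- Defines "fight/punch", referring to "fight/drum_beat". -->
  <alias name="punch">
    <target name="drum_beat" />
  </alias>
</group>
```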
Here's an example:
<?xml version="1.0"?>
<sounds>
  <sound name="test_sound_1" />
  <sound name="test_sound_2" />
  <sound name="test_sound_3" />
  <group name="fight" subdirectory="fight">
    <sound name="drum_beat" />
    <sound name="drum_ending" />
  </group>
  <alias name="alternative_name_for_test_sound_1">
    <target name="test_sound_1" />
  </alias>
  <alias name="alternative_name_for_fight_drum_beat">
    <target name="fight/drum_beat" />
  </alias>
</sounds>
Right now we support OggVorbis and (uncompressed) WAV files.
A general advice when creating sounds is to keep them normalized, which means "as loud as possible". It doesn't matter whether you record a mouse squeak or a plane engine, the sound file should be equally loud. This gives the best sound quality.
Scale the sound by changing the volume attribute
in the sound configuration.
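So, instead of re-exporting a quieter wav file, you would write something like this (the sound names and values are illustrative):

```xml
<!-- Keep mouse_squeak.wav itself normalized; make it quiet via the config. -->
<sound name="mouse_squeak" volume="0.1" />
<sound name="plane_engine" volume="1.0" />
```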
If sound is supposed to be spatialized (i.e. played by Sound3D method), make sure it is mono. Some OpenAL implementations never spatialize stereo sounds.
Specifically when making a footsteps sound: synchronize its duration with the HeadBobbingTime, which you can set for example using the player XML configuration file (if your game loads it). By default it is 0.5, which means your footsteps sound should be around half a second long. Or you can record 2 footsteps and make it 1 second long.
The important thing is to synchronize these times, to make them feel right in the game — visuals (head bobbing) should match what you hear (footsteps sound).
Copyright Michalis Kamburelis and other Castle Game Engine developers.
Thank you to Paweł Wojciechowicz from Cat-astrophe Games for various graphics.
This documentation is also open-source and you can even redistribute it on open-source terms.