How to successfully localise video game dialogue
Keywords Studios continues 'A series unlocking modern game development'. This week: Audio
At Keywords Studios, we offer an array of audio services to game developers. From original voice-over production and localised script writing to localised voice-over recording, mastering, mixing, integration, sound design and music, these all sit under one roof and, like music, are in harmony with one another.
Audio has seen rapid change in the past decade, expanding from voice localisation to voice production, and then to music and sound design - and as this has happened, our practices have transformed accordingly.
In this article I'd like to give you insight into how we craft our games' localised dialogue within the Keywords Audio Services environment. Below is an outline of our process, designed so that planning, capture and editing work together to preserve the energy of our voice actors.
1. Starting off: get your localised script right
From the game's initial concept, script writers are tasked with mapping out localised dialogue to fit within the boundaries of the game's design - for example, during particular sequences or expositional moments. Because all of this lives inside an interactive system, it is not easy to predict, from a design perspective, the exact moment a player will experience what we have scripted and intended to be shown.
2. Assembling your voice actors: personality and consistency
Video games now have performances akin to movies. It is therefore essential to understand your character and their personality from the outset, as the lines you record will not be chronological. Each line takes place at a particular moment in the story, framed by a particular set of emotions, and therefore consistency between takes is essential.
To preserve the developer's vision, we therefore capture all of the information pertaining to each character in a database (including their bio), so that this vision is retained and transferred accurately to the in-territory professionals who will record and localise the dialogue.
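As an illustration of the kind of record this can involve - a minimal, hypothetical sketch rather than the actual Keywords database, with field names invented for the example:

```python
# Hypothetical character-database entry; the field names and values are
# illustrative assumptions, not the Keywords schema.
from dataclasses import dataclass, field

@dataclass
class CharacterEntry:
    name: str                     # in-game character name
    bio: str                      # personality, backstory, narrative arc
    age_range: str                # e.g. "35-45"
    voice_notes: str              # timbre, accent and energy as the developer intends
    reference_audio: list[str] = field(default_factory=list)  # source-language samples

hero = CharacterEntry(
    name="HERO_01",
    bio="Veteran pilot; dry humour masking grief over a lost crew.",
    age_range="35-45",
    voice_notes="Low register, measured pace, clipped military delivery.",
    reference_audio=["en/HERO_01_sample_01.wav"],
)
```

A structured record along these lines is what allows an in-territory casting director to brief local talent without guessing at the developer's intent.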
"Video games now have performances akin to movies. It is therefore essential to understand your character from the outset, as the lines you record will not be chronological"
3. Preventing your pipeline snowballing: getting the technical aspects right
Getting things right at this preliminary stage also requires us to consider two associated technical aspects. First, the length of the localised target line should match the length of the original-language line, according to the game engine's requirements. This has a huge impact on our global pipeline, affecting the audio production process and budget.
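To make the constraint concrete, here is a minimal sketch of an automated length check - assuming WAV deliverables, the open-source soundfile library and an illustrative 10% tolerance, none of which is a Keywords specification:

```python
# Sketch of a line-length check: the localised take should stay within a
# tolerance of the source take's duration. Paths, file names and the 10%
# tolerance are illustrative assumptions.
import soundfile as sf

def duration_matches(source_path: str, target_path: str, tolerance: float = 0.10) -> bool:
    src = sf.info(source_path).duration   # duration of the original-language line, in seconds
    tgt = sf.info(target_path).duration   # duration of the localised line
    return abs(tgt - src) <= src * tolerance

if not duration_matches("en/LINE_0042.wav", "fr/LINE_0042.wav"):
    print("LINE_0042: localised take breaks the engine's timing constraint")
```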
Second, it is important to make the recording environment consistent from source to target audio. In other words, the line audio level, dynamic range and voice effects all need to match up from the original file to the one you are replicating, ensuring players receive the same impact the designers intended.
Coupled with a strong script, which reflects the constraints of the game's design, and an audio asset management system, this will greatly ease the integration of international assets back into the game build.
4. The audio information package
With voice-over work, it is critical that audio pre-production and recording are handled under one roof. This keeps everyone at Keywords Studios in sync with the other project components, preventing not only mistakes but also repeated work.
Like the character database, the script therefore needs to contain all of the information anyone may require in their part of the pipeline. For example, it should carry consistent text and audio references, character names, audio file names, timing constraints and so on.
This master script, alongside your character database and the audio reference samples which drive the localised casting process, is referred to as your 'audio information package', which you can then disseminate to your target territories to begin pre-production.
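To picture what one entry of such a master script might carry, here is a minimal, hypothetical sketch; the field names are assumptions for the sake of the example, not a Keywords format:

```python
# Hypothetical master-script entry inside the audio information package;
# field names are illustrative, not a Keywords schema.
from dataclasses import dataclass

@dataclass
class ScriptLine:
    line_id: str            # unique key shared by the text and audio pipelines
    character: str          # must match an entry in the character database
    source_text: str        # original-language dialogue
    target_text: str        # localised dialogue, filled in per territory
    audio_file: str         # delivery file name, e.g. "LINE_0042.wav"
    max_duration_s: float   # timing constraint imposed by the game engine
    context_note: str = ""  # scene and emotion notes for the voice director

line = ScriptLine("LINE_0042", "HERO_01", "Hold the line!", "",
                  "LINE_0042.wav", 1.8, "shouted over gunfire")
```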
Each territory will then check all of the assets and familiarise themselves with the content. Starting with the character bios and the game's description, together with the script and related character breakdown, they will move on to their local casting process and the selection of a voice director.
5. Your localisers should always run a pre-check phase
In addition to the artistic aspects, we always look at how the character and their associated actor will affect the logistics, planning and budgeting of the localised recording. A good pre-check phase - with formal and content checks of both text and audio - is therefore key to ensuring a smooth recording process.
This includes adapting the text in the script so that you don't need to pause recording to adjust dialogue which doesn't fit the time allocated to it in the original recording. This keeps a proper vocal rhythm, avoiding the need for actors to artificially speed up their lines, or slow them down, to match the original performance timing.
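As an illustration of that kind of text pre-check, the sketch below flags adaptations that are unlikely to fit the source timing, using a rough characters-per-second speaking rate; the 15 cps figure and the example line are assumptions, not a Keywords standard:

```python
# Rough text-fit pre-check: flag localised lines that probably cannot be
# delivered in the time of the original take. The ~15 characters-per-second
# speaking rate is a rule-of-thumb assumption.
def likely_fits(target_text: str, source_duration_s: float, cps: float = 15.0) -> bool:
    estimated_duration = len(target_text) / cps
    return estimated_duration <= source_duration_s

lines = [("LINE_0042", "Tenez la position, quoi qu'il arrive !", 1.8)]
for line_id, text, src_duration in lines:
    if not likely_fits(text, src_duration):
        print(f"{line_id}: consider shortening the adaptation before the session")
```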
6. Following the recording
After recording the localised dialogue, all assets need to be cleaned, edited, exported and named as per the references in your master script and audio information package. Loudness will need checking to ensure it matches the source-level specification.
All target audio should be levelled (and, where required, mastered) so that no lines are too loud or too quiet, especially when the dialogue will need to be mixed with additional sound and music.
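As a sketch of what such a level check can look like - assuming WAV files, the open-source soundfile and pyloudnorm libraries, and an illustrative -23 LUFS target with a 2 LU window rather than any Keywords specification:

```python
# Loudness check sketch: measure the integrated loudness (ITU-R BS.1770) of a
# localised line and flag it if it falls outside the target window. The
# -23 LUFS target and 2 LU tolerance are illustrative assumptions.
import soundfile as sf
import pyloudnorm as pyln

def within_spec(path: str, target_lufs: float = -23.0, tolerance_lu: float = 2.0) -> bool:
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)                     # BS.1770 loudness meter
    loudness = meter.integrated_loudness(data)   # in LUFS
    return abs(loudness - target_lufs) <= tolerance_lu

if not within_spec("fr/LINE_0042.wav"):
    print("LINE_0042 (FR): re-level before mixing with sound effects and music")
```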
This is also the moment to apply voice EFX if required, though we are always aware that this can affect the earlier levelling (extra compression) and the timing (reverb/echo tails) of the target audio files. This matters because we may also need to deliver dialogue embedded in a single file and/or cutscene, with or without EFX and music.
Reviewing and checking all of these elements for the audio assets enables us to generate your 'as recorded' localised dialogue, meaning everything is ready to be incorporated into the game engine.
Again, managing multilingual assets depends on ensuring that all of the steps you have taken are consistent. This is perhaps the most critical stage in getting localised audio into the final build: a final check for asset consistency between the ultimate mastered source audio (which may have changed in the meantime) and the final localised target files. One suggestion I can make is to use an audio levelling tool that can check large amounts of source/target data while also performing this final asset consistency check.
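One way to picture such a tool - a minimal sketch, not the tool we actually use - is a batch pass that confirms every mastered source line has a localised counterpart under the same file name and that durations have not drifted; the folder layout and 10% tolerance are assumptions:

```python
# Batch consistency check sketch: each mastered source line should have a
# localised counterpart with the same file name and a comparable duration.
# Folder layout and the 10% tolerance are illustrative assumptions.
from pathlib import Path
import soundfile as sf

def check_consistency(source_dir: str, target_dir: str, tolerance: float = 0.10) -> list[str]:
    issues = []
    for src in sorted(Path(source_dir).glob("*.wav")):
        tgt = Path(target_dir) / src.name
        if not tgt.exists():
            issues.append(f"{src.name}: missing localised file")
            continue
        src_dur = sf.info(str(src)).duration
        tgt_dur = sf.info(str(tgt)).duration
        if abs(tgt_dur - src_dur) > src_dur * tolerance:
            issues.append(f"{src.name}: duration drift of {tgt_dur - src_dur:+.2f}s")
    return issues

for issue in check_consistency("mastered/en", "mastered/de"):
    print(issue)
```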
Once these final checks have been completed, we are ready to hear the 'song' we have created together.
"This is the second in a series of pieces focusing on development insight from Keywords Studios. For more information on their services, including Audio visit https://www.keywordsstudios.com/
Based in Milan, Italy, Andrea Ballista is the Audio Service Line Director at Keywords Studios. Responsible for overseeing the efforts of multiple studios across the world, Andrea has over two decades of expertise in devising audio solutions for a wide range of major clients in the video game and wider entertainment industries. Previously at Binari Sonori, an Italian localisation provider he co-founded in 1994, Andrea led the studio to become a leader in the localisation market, offering an array of audio services encompassing casting and dubbing pre- and post-production, backed by a services network with global reach.