By: Izumi Hasegawa | September 12, 2017
The hit video game franchise Uncharted recently debuted its newest adventure, Uncharted: The Lost Legacy, to fans worldwide. If that wasn’t exciting enough, each country once again received a version in its own language. Audio engineer Ayako Yamauchi was one of the key people who made this possible, and the amazing part is that some of the versions were done in languages she doesn’t even speak!
We sat down exclusively with Yamauchi to talk about how this was even possible and what goes into her remarkable work.
Q: What was your role in creating Uncharted: The Lost Legacy?
My role was dialogue mastering for the game’s foreign-language releases. I worked on the Dutch, Italian, Arabic, and French translations of the game’s voiceover dialogue. The skill set the project required was a combination of dialogue editing and audio mastering, two aspects of audio that I spent years studying. I’ve worked for mastering studios in the past (Dave Collins Mastering and M-WORKS Mastering) and gained an understanding of what an engineer needs to watch for when making sounds louder across an entire project.
For Uncharted: The Lost Legacy, the studio received 12 different languages of voiceover dialogue, each language with 10,000 audio files to be mastered. We had a very tight deadline, and I needed to make quick, correct judgments throughout the process. My work experience at mastering studios was a huge benefit for this endeavor because I had learned different ways of editing an audio file. For example, making an audio file louder is not as simple as cranking up the volume the way you would turn up a TV show on your television. When you make an audio file louder, parts of the sound’s frequency range, especially around 4-5 kHz, can become harsh for a player to hear. This is where we deal with what is called “psychoacoustics”: making sure that all of the audio in a video game, or film, is perceived the way it was intended by anyone listening.

These kinds of problems are sometimes not obvious while recording with a voice actor. In the recording studio, the recording engineer, the voice actor or actress, and the voice director all need to focus on the performance of a line of dialogue, not the final quality of the sound. My role therefore takes place outside of the recording studio, where I make the dialogue sound clean and consistent with the rest of the game’s audio before it is finalized. I used tools on my computer such as an equalizer, a compressor (limiter), and a de-esser, and I used iZotope RX to clean up any noise found in a dialogue recording.

With the variety of languages I had to take care of, it was critical to make sure each one would be understood by its respective audience. People in the Netherlands generally speak English and German fluently, so they can enjoy a film in either language. In video games, however, the majority of players are kids or young adults, so we need to localize a game into more languages than films or TV shows. After the localization process, other engineers in the studio implement these audio files into audio middleware and mix the sound, placing all of the sounds directly in the game. I had to prepare all of these sounds so that implementation could be a smooth process.
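To make the batch side of this concrete, here is a minimal Python sketch of one pass over a folder of dialogue files: a gentle EQ cut near the harsh 4-5 kHz region, followed by loudness normalization. The libraries (soundfile, pyloudnorm, scipy), the -23 LUFS target, the folder name, and the filter settings are all illustrative assumptions; Yamauchi’s actual chain used an equalizer, a compressor/limiter, a de-esser, and iZotope RX.

```python
# Hypothetical batch dialogue-mastering pass (illustrative only; not the
# studio's actual toolchain).
import glob
import numpy as np
import soundfile as sf              # pip install soundfile
import pyloudnorm as pyln           # pip install pyloudnorm
from scipy.signal import lfilter    # pip install scipy

TARGET_LUFS = -23.0                 # assumed target loudness

def peaking_eq(fs, f0, gain_db, q):
    """RBJ-cookbook peaking EQ; a negative gain_db makes a gentle cut."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

for path in glob.glob("vo_dutch/*.wav"):     # hypothetical folder layout
    data, rate = sf.read(path)
    # Tame the harsh region around 4-5 kHz with a broad, shallow cut.
    b, a = peaking_eq(rate, f0=4500.0, gain_db=-2.0, q=1.0)
    data = lfilter(b, a, data, axis=0)
    # Measure integrated loudness and normalize to the target level.
    meter = pyln.Meter(rate)
    loudness = meter.integrated_loudness(data)
    data = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
    sf.write(path.replace(".wav", "_mastered.wav"), data, rate)
```

Repeating the same pass for each language folder is how thousands of files per language can stay consistent without hand-tuning every one.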
Q: You have worked on other game projects as well. What was the biggest difference between this project and your other projects?
Back in 2014, I had a chance to work on a mobile game called Gooroo. I also completed voiceover editing for Star Wars: The Old Republic – Knights of the Eternal Throne. The trailer for Bandai Namco’s Tales of Berseria, viewed by almost 3 million people, is also part of my past work. The biggest difference is between working as part of a team and working for myself. When working with a team of audio engineers and sound designers, I don’t often take on different roles, and I’m usually able to focus on a single task. The audio supervisor, or director, of each project may divide some of the work among editors. I may handle a critical part, but a lead supervisor or mixer usually does the final check of a game’s audio. On the other hand, when working by myself with my own clients, I often take on multiple roles. Those are usually short-term projects, such as promotional trailers, where I do all of the audio editing, mixing, and mastering. Sometimes I am asked to go to recording studios to record voiceover and/or sound effects. I often hear client feedback directly when they want revisions. People have their own preferences for how a motion picture should sound, and there is never a single right answer for how a game or film should sound, but I always work hard to get to that answer.
Q: Besides video games, I understand you have also worked on television and films as a sound engineer. What are the biggest differences between those media for a sound engineer? How did you acquire the skill set?
The skills needed to create sound are similar for films and games. Feature films require me to create very detailed, layered sound effects that audiences can enjoy in a theater. Even if the sounds are exaggerated compared with what they would sound like in real life, they are meant to make the movie as entertaining as possible. TV shows have very tight deadlines compared to films and need consistent sound quality in each episode. The entire process for a show is usually done within the same digital audio workstation (DAW), such as Pro Tools. Picture editors can export video files and production audio files so that the audio team can import them into Pro Tools. Pro Tools has been the industry-standard DAW for many years; engineers and studio staff can exchange files and work within the same project, making the work fast and efficient.

For interactive media such as video games, we cannot mix everything within the same DAW. After the sound designer and dialogue editor deliver the sound effects and voiceover, the engineers on the team need to implement those sounds into audio middleware. Middleware is the tool that communicates between the audio team and the programming team. Wwise, FMOD, and CRIware are commonly used third-party middleware programs, but sometimes a game company will use its own middleware.

I learned my game audio workflow after graduating from Berklee College of Music, while working on TV shows and films as a sound editor. I studied this method at Berklee Online with Emmy-nominated composer/sound designer Gina Zdanowicz. This year, I also had a chance to work for CRIware at the Game Developers Conference as an audio demonstrator, which allowed me to show developers from around the world how to integrate audio into their own game engines efficiently.
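To illustrate the handoff she describes, here is a short Python sketch that pushes mastered dialogue files into a Wwise project through WAAPI (the Wwise Authoring API) using the waapi-client package; Wwise is one of the middleware tools named above. The file list, object path, and language name are placeholder assumptions, and FMOD, CRIware, or a proprietary tool would each have its own equivalent workflow.

```python
# Hypothetical WAAPI import of localized dialogue into Wwise
# (paths, language, and hierarchy below are placeholders).
from waapi import WaapiClient       # pip install waapi-client

files = ["D:/vo/dutch/VO_chapter1_0001.wav"]   # assumed mastered files

with WaapiClient() as client:       # connects to a running Wwise instance
    client.call("ak.wwise.core.audio.import", {
        "importOperation": "useExisting",
        "default": {
            # Voice assets are imported per language; the name is assumed.
            "importLanguage": "Dutch",
        },
        "imports": [
            {
                "audioFile": f,
                # Placeholder hierarchy targeting a Sound Voice object.
                "objectPath": ("\\Actor-Mixer Hierarchy\\Default Work Unit"
                               "\\<Sound Voice>" + f.split("/")[-1][:-4]),
            }
            for f in files
        ],
    })
```

Once the assets are in the middleware, sound designers attach them to events that the programming team can trigger from the game engine, which is the communication between teams described above.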
Uncharted: The Lost Legacy was released on August 22.
For more info, visit the game’s official Facebook page:
https://www.facebook.com/UnchartedTheLostLegacy/
Interview by Izumi Hasegawa – @HNW_Izumi
Edited by: Jody Taylor – @RealJodyTaylor
Follow Us: What’s Up Hollywood at @WhatsUpHWood