Producer/engineer and Waves product developer Yoad Nevo (Sia, Pet Shop Boys, Jem) discusses his contributions to the digital revolution in music production, and much more.
Yoad Nevo has been an in-demand mixer and producer, as well as a digital audio guru, for over 20 years. With numerous A-list production and mix credits and a major role at Waves Audio, he has been active on both the musical and creative side and the technology front. I met up with him at his studios in London to talk about audio, plugins, music and his custom-made Neve desk.
Hi Yoad. Thanks for having me over to chat about your work and contribution to the music industry. Can you tell me how you got started?
I started very young, at 17, by doing a year of engineering school in Tel Aviv. During that time, one of the biggest studios in the city contacted my school about looking for an assistant, and I was recommended for the job. A few months later I started assisting on recording sessions. During one of the sessions with a big artist, the engineer in charge had to leave, and the producer asked me if I could take over the session to record guitars. “No problem”, I said. In reality, I was shitting my pants, but I played it cool. I didn’t have a lot of experience with recording at that time, but since it was a guitar recording session, I knew I could get a sound I liked, as I’m a guitarist myself. I ended up engineering and co-producing the whole album after that session. It was a big album, which took a year to make. When it was over, the studio manager wanted me to go back to working as an assistant, but I considered myself an engineer at that point, even though I was only 19. So I left the studio to become a freelance engineer, and sat by the phone for a good few months, haha. But then things started to pick up and I got work in different studios. Israel is a small place, so you get to do everything: pop, rock, classical, world music, electronica. It was a good experience for me.
What was characteristic about Tel Aviv’s music scene when you started out, as opposed to what you later encountered in London?
In London and other big music scenes, it’s more natural to become an engineer that works mainly in one genre, like rock or hip-hop. But when I was younger, you had to be able to do everything. So I would be recording brass one day and then guitars on another, and then I started programming and producing. One of my first breakthroughs was with a band I produced, which became very successful. In addition to producing, I was a member of the band and co-wrote the songs. But I didn’t want to go on the road with them, preferring to stay in the studio, so I left before the album even came out.
You got royalty cheques from that though, right?
Haha, I wanted to leave so bad that I signed whatever their lawyers gave me… I don’t regret it anymore, but back then, I ended up looking for work whilst they were having success off the album I produced and co-wrote. That’s just how you learn, and it’s fine. But I’m glad that I chose to stay in the studio. I did end up getting a lot of work with big artists, and by the time I was 25, I’d done almost everything you could do as a mixer. In addition to that, I had assistants in different studios who would assist me with engineering. So I might engineer the rhythm section of a track, leave the studio, and they would do the overdubs. I could then come back later to mix everything. This way, I could work on four or five albums at the same time.
Did you get ear fatigue, doing that all day?
Not really. I never worked loud, and still monitor quietly, so it was never a problem.
You’ve said in past interviews that you had to teach yourself how to be an engineer as you went along. Is that true? How did you learn stuff that way? Wouldn’t you have to make mistakes at your client’s expense?
I sure did. Not too many, fortunately. At the end of the day, everything worked out and the labels and clients were happy, apparently, as they kept hiring me. I knew the desk and gear, so it was more about experimenting with mics. Those things stay with you for life. I don’t work long hours anymore, but if I had to work two days straight, I could do that, as I’ve done it so many times in the past.
Do you feel that working with Waves at such an early point helped your transition from analog to digital when the digital revolution happened?
Absolutely. Back then, digital audio was in its early days, and you couldn’t do much with it. You couldn’t run many real-time plugins at the same time. You had to process the audio and then bounce it. But even prior to working with Waves, I had already mixed an album in-the-box, in 1993. I had a Triple DAT system by Creamware, which could run one real-time plugin on the master bus. So I would have to solo a sound, insert the compressor on the master bus, bounce it, then bring it back into the session and replace the original. I would then follow it up with an EQ on the master bus, and do that for each instrument. When I wanted to use a reverb, I would bring all the faders on the mixer down, insert a reverb on the master bus, and start pushing faders up. I wouldn’t be hearing any dry signal, so I’d have to imagine how the instruments would sit in the wet reverb. Then I’d bounce the all-wet reverb file and import it back in as an audio track. If there was too much reverb on one sound, I’d do it again until I got it right.
In 1998, when I moved to London, I was still working on analog desks, but I also had seven computers as part of my setup, since you could only run a few plugins in real time at that time. I had two Pro Tools systems, two native systems, a Creamware system, a GigaStudio system and a SampleCell system, and they’d all run together. So one computer would run all the plugins I used for guitars, another would be for drums, another for reverbs, samples, and so on.
Having worked in both analog and digital realms, do you find that you prefer one sound over the other?
I prefer the sound of analog, though I appreciate the benefits of plugins, which are great for processing individual words in a vocal, rendering reverbs, reversing things, etc. The functionality can sometimes outweigh the depth you get from analog gear. The way I work now is the best of both worlds, since I have my Neve console for summing, but still work mainly in-the-box, with the exception of Q-Clone, which allows me to use the desk EQ.
Working in-the-box also allows me to recall my mixes easily; all I have to do is line up my faders to unity gain. I always hated doing recalls on analog consoles in the past. It takes two to three hours and never sounds quite the same as the original mix. So my workflow has changed, and this is where the benefits of digital come into play. In the old days, I’d have two days to mix a song, whereas now I can work on a mix for a few hours and then listen to it on different systems, at home, in the car, on headphones, which is where the real work comes in. I make notes to myself, and apply them by loading up the session in just five minutes and tweaking the mix. So I’m able to spread my work over a period of a few days, but I end up spending roughly the same number of hours as before.
I see that you have a big synth collection here. Where did you get all this?
I just collected stuff over the years. I like my analog synths, but I’m a fan of digital synths too, which is why I enjoyed developing Element and Codex for Waves so much. I like how the analog and digital domains interact. For example, our new wavetable synth, Codex, was created using my analog synths, which I used to create the wavetables. It’s not a sampler; the technology uses wavetables, which allow for more diverse manipulation, but you still have access to sounds that come from a Minimoog, SH-101 or a Korg MS-10.
Would it be fair to compare Waves’ Codex to Native Instruments’ Massive, since they’re both wavetable synths?
I love Massive. It’s a great tool, but there’s nothing to compare. The waveforms in Massive are still mostly classic waveforms, such as sine, square, etc. Also, Massive makes heavy use of artifacts in its wavetables, much like the classic wavetable synths of the 80s and 90s. In Codex, we strove to eliminate these artifacts. I’m not saying that artifacts are bad, but with Codex we took wavetable synthesis to the next level.
Interesting. Another Waves product you’ve worked on is the NLS. What was the process like for making that?
It was a very lengthy process. We had to figure out what we wanted to capture from the desks, which involved running different test signals through them and taking various measurements of the electronic components for modeling. So we did a lot of that sort of experimentation and R&D on [producer/engineer] Mark “Spike” Stent’s desk, which we had shipped to Tel Aviv for six months. Then, once we had the process figured out, I sampled the 32 channels of my desk here in the studio. We did the same with [producer/engineer] Mike Hedges’ desk.
Regarding your own custom Neve desk, can you tell me why you haven’t contacted Neve about making a new custom one, instead of using a desk from decades ago?
Because it’s a one-of-a-kind desk. The biggest size Neve used to make of this specific model was 48 channels, and mine has 60, since I had two desks merged together to make one big desk. The center section was taken from a desk in a film studio in LA, so the master channel is 8-way, 7.1, which is great for me since I do a lot of surround mixing. It’s an old-school, class-A, all-transformer desk from 1981, and it was the first Neve desk to have channel dynamics. So it’s unique, and it sounds amazing.
I’ve often heard people talk about the “sound” of a desk, and how they might prefer an SSL over a Neve or vice versa. It’s quite an abstract concept for someone who doesn’t have much experience in the analog world. What are your thoughts on SSL’s sound versus Neve’s sound?
If I were to go back to mixing solely on an analog desk, I’d go for an SSL. My Neve desk doesn’t have recall built in, which would make it unusable for that. Also, I prefer the SSL for sound sculpting, which is why I use the SSL plugins so much, but even then I’ll run everything through the Neve desk to get the extra width and headroom. I also use the Neve for recording, and you can’t really compare the SSL preamps to the Neve ones, as the Neve mic pres have so much more headroom.
As for why people prefer one desk over the other, it’s the same as with guitars: one guy will say he loves his Stratocaster, and another will say he loves his Les Paul. There’s no contradiction in that.
GTR is another Waves plugin that you helped develop. I use it a lot, and have noticed that it has a sharp, present sound that cuts through the mix around 3 kHz.
This is something I had a lot to do with. I’m aware that some people may not like it but I wanted GTR to have the sound I would personally use when recording guitars, which has a lot of presence in the mids and highs. On GTR3 we went back to a more natural sound for the PRS models, so we ended up having both. When I record guitars, I do a lot of processing to the sound in a way that sits well in the mix.
What are some of the most creative ways you’ve used GTR?
I use it on vocals as well as on room mics for drums. The amp distortion in GTR is something you can’t get from a pedal or another plugin. We modeled the amps to be very responsive to different levels. The overdrive and pre-amp distortion are very sophisticated and responsive, and lend themselves really well to drums, unlike the distortion you get from pedals, which is sonically close to digital clipping. So if you want to use GTR on drums or vocals, try taking the cabinets off and you’ll get an interesting result.
How do you feel about the fact that Waves has created a lasting legacy for helping to revolutionize the digital music world, whilst simultaneously equipping people with tools to abuse things, like using the L1 to destroy a mix?
Haha, I keep saying to Meir Shaashua, Waves’ co-founder who designed the L1 and L2, that in a sense we’re responsible for the loudness war. But it’s always like that with technology and art. When the technology is available, artists will abuse it, and then the art evolves as a result. In the early 80s, when the Yamaha NS-10s came out, it changed music. The punch you hear in 80s music comes from the fact that a lot of music from that period was mixed on those speakers. Same with the DX7s and RX11s. Not to mention 808s. The 808 was meant to be a virtual drummer, and what it turned out to be was something completely different. So this always happens. The L1 and L2 changed music, for better or worse. When cameras became available, it changed art. Prior to that, painters would sit for hours creating realistic portraits, but the camera made that obsolete. Now, in the modern age, no one wants realistic pictures; they’d rather edit them with filters, etc.
What Waves plugins would you recommend to new producers/mixers?
It depends on what they’re making. If they want to be mixers, I’d say the SSL Bundle. If they want to produce, I’d still say the SSL Bundle, in addition to GTR3 and the new TG12345.
You’d recommend getting both the SSL Bundle and the TG12345? But they’re both desk emulations.
The TG12345 may be a channel plugin, but it does something entirely different, sound-wise. With the SSL bundle, you can mix and make things sound modern. With the TG, you can be creative and get more character out of it, because of its unique dynamics, drive and EQ sections.
How do you feel about TG12345 vs. REDD?
I prefer the TG, as it’s more versatile. I like the harmonics and the sound you get by running things through the REDD. It has a nice bass and treble boost too. But you can’t compare it to the TG12345 in terms of functionality. The TG has a compressor, 3-band EQ, parallel compression, etc.
When mixing, I’ve heard that you don’t use a lot of reverb, and that you turn to other things to create a sense of depth for your sounds. What kinds of “things” are those?
Mainly delays.
Won’t delays clutter things up in the mix?
It depends on the style of music that you’re mixing. I may have said that around five years ago, but things have changed since. Back then, things were a lot more in your face, and if you wanted depth, you would use a slap delay, which creates depth without smearing transients and without taking up too much headroom in the mix. Because delays aren’t diffused, you can sit them a lot lower in the mix than reverb. It’s also about headroom: even if you’re using a reverb that isn’t very present, it’s still going to take up a lot of headroom, since its frequency content is pretty wide. If you use delays on vocals, you can make them 20 dB quieter than a reverb and still get that added depth. Delays also give you extra control over the groove of a song.
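As a rough illustration of the idea (this is not Yoad’s actual chain; the function name, sample rate, delay time and level are invented for the example), a slap delay is just a single undiffused copy of the signal mixed well below the dry sound, with no feedback and no diffusion:

```python
import numpy as np

def slap_delay(signal, sr=44100, delay_ms=80.0, level_db=-20.0):
    """Mix one delayed copy of `signal` back in, `level_db` below the dry
    signal -- a minimal sketch of the slap-delay depth trick."""
    delay_samples = int(sr * delay_ms / 1000.0)
    gain = 10.0 ** (level_db / 20.0)           # -20 dB -> 0.1 linear
    out = np.concatenate([signal, np.zeros(delay_samples)])
    out[delay_samples:] += gain * signal        # one echo, no feedback
    return out

# A unit impulse: the echo lands one delay time later at a tenth of the
# dry amplitude -- audible depth, but barely any extra headroom used.
dry = np.zeros(1000)
dry[0] = 1.0
wet = slap_delay(dry, sr=1000, delay_ms=80.0)   # 80 samples at sr=1000
```

Because the echo is a discrete copy rather than a diffuse tail, its energy is concentrated at one point in time, which is why it can sit so much lower in the mix than a reverb while still reading as depth.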
You mentioned in a MusicTech interview that when you were working with tape, you would sometimes hit tape hard to smear transients. Why do that?
Because it sounds good. Maybe not for EDM, but again, genres are evolving. It’s just easier to mix when you don’t have too many transients. Smeared transients also add to the depth that people talk about in analog gear. Having said that, life is too short to be recording to tape these days. I’d rather get on with other things, so I use tape emulation to achieve this effect.
You’ve also mentioned in past interviews how you’ve had sessions sent to you for mixing that have multiple plugins on different sounds, and your opinion is that each plugin added to a chain degrades the sound. Can you talk about that more?
I would tend to think that using more than two EQs in series isn’t beneficial for anything, unless you’re automating filters. And why use more than two compressors? I wouldn’t use more than two compressors on a vocal; each one does something specific, and I control their relationship. Overusing plugins makes things sound too digital, and the result is that the mix ‘stays in the speakers’ instead of ‘being in the room’. This is why I use analog gear as a reference. With analog, you don’t hear the speaker, but rather the presence of the sound that engulfs you. That’s a very important point of reference for mixing and recording. It’s like sitting in a room with a guitar amp; you don’t hear the amp and cabinet, you hear the sound of the instrument, and that’s what I’m looking for when I’m mixing. When I don’t get it, and hear only the speakers, I know something is wrong.
You’ve done a lot of webinars for Waves. Have there been any other ways that you’ve been sharing knowledge of audio and music?
Yes, my book, “Hit Record”, which is also going to see a digital release soon. I did some tours with Avid when Pro Tools 9 and 10 were released, and I occasionally give masterclasses. My webinars have the most exposure though, and people comment very positively on them, so it appears people have found them beneficial, which is great.
Wrapping up, can you tell me about what the future holds for you?
I’m working on a lot of different things, spanning from mixing to developing software, to writing and producing songs, mastering, making sample libraries, etc. I love doing it all. It keeps things interesting.