Vocal processing and mixing

Paulina Steel,

Sound designer.

After all stages of vocal editing are complete, you can move on to mixing the vocal with the backing track. One of the main aspects of mixing, whether a full multitrack or a voice over an instrumental, is equalization. With skillful use of this tool a sound engineer can make the mix harmonious and "tasty"; with careless use, he can destroy what was recorded well and aggravate the recording's defects.

Remember that historically the equalizer was created as a device for correcting recording defects, so the first step is to analyze the backing track and the vocal track for defects in the frequency response.

Nowadays, when almost everyone has access to fairly good microphones, gross recording defects of the kind common in the past are rare. However, every microphone has its own sound: one may exaggerate certain frequencies, another may add an excessive nasal quality to the vocalist, and so on. Any "too much" can be treated as something worth correcting. Of course, the question of monitoring arises here: monitor speakers also differ, and one pair may overstate a range that another understates. In this article we assume the reader is mixing on professional studio monitors and judges the recording by comparing it with reference recordings and drawing on his own listening experience with that system.

You should also remember that an equalizer can completely change the character of an instrument or a vocal. The engineer's task, again, is first of all to do no harm and, where possible, to improve the sound while keeping the performance human and alive, preserving and, ideally, emphasizing its characteristic features. In most cases the vocalist is the main character of the composition, and his EQ should be approached accordingly.

It is safe to say that no singer carries useful signal in the sub-low spectrum, that is, below roughly 40 Hz, and often this threshold is considerably higher. At these frequencies you can therefore use a cut (high-pass) filter, or at least a shelf filter if there is a danger of thinning out the low end of a male voice too much.

Among the most common problems is the "vocalist under a pillow" effect, when a muffled, nasal overtone is clearly audible. One of the most common mistakes in fixing it is to boost the upper mids and highs with an equalizer. That approach risks making the vocalist sound unnatural and electronic, especially with digital equalizers. Before making such a move, try the opposite: cut the vocal's lower mids. After that, a boost in the upper spectrum may not be needed at all, or may be far less drastic. We deliberately give no exact frequency values, because there is no universal recipe for equalizing vocals: every performer is individual, and microphones and equalizers differ in their technical characteristics.
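As an illustration of the low-cut recommendation above, the sketch below (Python; a hypothetical first-order filter with illustrative signal values, not a production EQ) passes a 30 Hz sub-low rumble and a 1 kHz "voice" tone through a simple high-pass set at 40 Hz:

```python
import math

def highpass(samples, cutoff_hz, sample_rate):
    """First-order (6 dB/octave) high-pass: y[n] = a*(y[n-1] + x[n] - x[n-1])."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out = []
    prev_x, prev_y = 0.0, 0.0
    for x in samples:
        y = a * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

sr = 48000
t = [i / sr for i in range(sr)]                         # one second of audio
rumble = [math.sin(2 * math.pi * 30 * x) for x in t]    # 30 Hz sub-low rumble
voice  = [math.sin(2 * math.pi * 1000 * x) for x in t]  # 1 kHz "voice" tone

cutoff = 40.0  # low-cut below the useful vocal range
print(rms(highpass(rumble, cutoff, sr)) / rms(rumble))  # well below 1: rumble attenuated
print(rms(highpass(voice, cutoff, sr)) / rms(voice))    # close to 1: voice untouched
```

Note that a first-order slope is gentle (only about 4 dB down at 30 Hz here); the low-cut filters on real equalizers are usually 12–24 dB per octave, which removes the sub-lows far more decisively while leaving the voice alone.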
For a human listener, the voice is the musical instrument we know best: we hear it more often than any other, and our listening experience with it is the richest. For that reason, voice equalization should be especially clean, gentle, and almost imperceptible. Always check your work by switching the equalizer into bypass, so you can hear the difference between before and after and judge whether it has actually become better. As with editing, if you are not sure the equalization has benefited the vocal, it is better to keep working on it, to listen on different speakers, or to abandon it altogether.

An equalizer can also serve as a tool for building the space of the mix; in the case of a vocal, for placing the vocalist within that space. Before deciding on this placement, do a stylistic analysis of the composition as a whole and ask yourself the following questions:

- What style does this composition belong to?

- In what kind of room are compositions of this style actually performed?

- What position does the vocalist typically occupy relative to the musicians in this style?

- What position does the vocalist typically occupy relative to the audience in this style?

Using an equalizer, you can bring the vocalist closer to the listener by raising the highs and lows, or push him back into the backing track by doing exactly the opposite; you can give a voice a retro, slightly "telephone" character by raising the upper mids, roughly in the 2–3 kHz range, and so on.

So if you answer all the questions above correctly, you will be able to equalize either in keeping with the style of a particular composition or in deliberate deviation from its usual rules. We know, for example, that in rock the musicians stand close to the vocalist, but he is still the front man, standing slightly closer to the audience than the rest of the band. In old-school pop, by contrast, the vocalist is usually alone on stage, because he is the "star", and the distance from him to the audience is much smaller than the distance from the accompanying musicians to the audience.

As for equalizing the backing track: most often it is an already finished work, prepared, as it were, for the vocal, so additional interference with an equalizer is in most cases unnecessary. But bearing in mind the individual characteristics of a performance, and the fact that backing tracks of mediocre quality are common today, we leave room for some equalization here as well; in particular, it is clearly noticeable when the track as a whole lacks high or low frequencies. But what should you do if the vocal conflicts with the backing track and will not sit in it? Since the main working frequencies of the voice usually lie between roughly 1.5 and 2.5 kHz, you can make a slight cut (a couple of dB) in the backing track in that same range, choosing the exact frequencies to suit the voice of the particular vocalist. Do not conclude, however, that the backing track should be fully bent to fit the vocal with an equalizer: if the voice still does not fit the instrumental, check again and try changing the equalization of the voice itself.

Thus, in relation to vocals the equalizer acts not only as a device for eliminating recording defects, but also as a means of creating the spatial environment of the mix and a tool for shaping an artistic image, and its use sometimes demands not so much a technical as a creative, even philosophical, approach.

Continuation...


Making vocals perfect is difficult, but there are plugins that can help you achieve a more transparent and balanced sound. We have selected the five best for vocal cleanup and correction.

1. iZotope Nectar 2

Nectar 2 is a vocal processing station with built-in presets and a convenient interface. In addition to the classic tools that form the core of vocal processing, such as de-essing, gate, compressor and EQ, it can also blend in reverb, saturation, delay and much more.

2. Revoice Pro

Revoice Pro is used by virtually all top producers to quickly and accurately adjust the pitch, duration, and balance of voices and instruments.

This plugin includes many features that are simply indispensable for mixing both solo parts and backing vocals.

3. Celemony Melodyne

Celemony was one of the first to create a flexible track-editing program that works well for both vocals and live instruments. It is perhaps the most comprehensive editing tool ever created. Naturally, the program does not work miracles, and it is still important for the vocalist to sing as well as possible so that corrections remain painless.

4. Waves Vocal

This is not a single plugin, but a bundle that Waves often sells at a discount.

The kit includes excellent vocal-processing plugins: DeBreath, Doubler, Renaissance Axx, Renaissance Channel, Renaissance DeEsser and Waves Tune. Everything is here to edit the pitch of notes, remove sibilant whistles and breaths, and apply tasteful saturation.

If you want to process vocals professionally, this package of plugins will meet all your expectations.

5. iZotope RX

One more tool from iZotope that deserves a mention. This time it is a highly specialized audio editor for cleaning and restoring recordings. Using its graphical tools, you can pinpoint a problem area and fix it with as little collateral damage as possible: a plosive, a car horn in the background, or anything else that is hard to remove by standard means, including mouth noises and other unpleasant artifacts. It is well suited to those who work with recorded material such as seminars, video reviews and TV shows.

If I had to pick a question that I get asked most often, it would most likely be, “How do you mix rap vocals?” Well, or variations of this question. I get asked about mixing rap vocals at least once a week.

I mix rap vocals 4 or 5 times a week. And if you consider that several voices can be heard in one song, then even more. I have already formed a certain approach to rap mixing. Everyone knows that no two songs, vocalists, recordings or instrumentals are the same. Therefore, there are a huge number of approaches to mixing, and my method of mixing rap vocals is only one of many.

Concept

It all starts with a concept. I have always said this and will say it again: before you start mixing, you should have an idea of the final picture. You need some understanding of how the vocals will sit in the mix before you start mixing them. This idea may well change along the way, but you still need to start with some worked-out direction of travel.

The main problem people have when mixing rap vocals is that they focus on the word “vocals” rather than the word “rap”. In general, the word “rap” is also very general: there is a huge difference between rap from New York in 1994 and rap from Los Angeles in 2010.

Even within the same school of rap, there are differences. For example, let's compare two tracks:

  1. A Tribe Called Quest – “1nce Again”
  2. LL Cool J – "Loungin'"

Both songs can be classified as calm, light rap, but the principles of mixing these two tracks are radically different.

The track “Loungin’” is quintessential Bad Boy style. It was mixed by Rich Travali; you can hear similarities with tracks by the likes of 112, Total, Mariah Carey and the late Biggie.

The track “1nce Again” is an example of mixing by Bob Power. This sound largely dominated the early New York rap market.

I'm linking to these tracks because I hope you're able to hear the difference in the mix. Notice how on “Loungin’” the vocals are elevated above the rest of the mix. They are at the same volume as the snare, have a sunny sound with soft highs, and sound very clear and detailed in the mids.

Meanwhile, on “1nce Again” the vocals sit noticeably lower in volume than the snare, the mids push forward very aggressively, the highs are grainier, and the lows are cut by a high-pass filter. The shaping of the vocals in these two tracks differs too: the compression is very light on “Loungin’” and, on the contrary, very aggressive on “1nce Again” (especially on Phife’s voice).

Let's look at another modern track, say, Nicki Minaj - "Massive Attack".

Here in the vocals we have very clear highs and high mids. The vocals rise above the entire mix and don't have as low a mid-range as they do on "Loungin'."

Each of the three tracks has something specific:

  1. On “1nce again” the vocals sound sharp and aggressive, in keeping with the New York school of rap.
  2. “Loungin’” sounds very gentle and smooth, the way vocals are mixed mostly in R&B music.
  3. On “Massive Attack,” the vocals sound crystal clear, but leave room in the low end for the drums, which will work well in clubs.

The bottom line is that you must decide in advance how you will mix rap vocals. Who is the artist's audience, what style does the artist work in, where will the song be played, and how can you, as an engineer, bring it all together?

Let's say you've decided what you want to achieve... but how exactly to achieve it?

Cleaning

Before mixing rap vocals, in most cases they need to be cleaned up. The main reason is that the vocals are often recorded in less-than-ideal environments, such as a closet (I run into this regularly) or a bathroom. It may sound strange, but there is a myth that recording rap vocals in the toilet or bathroom is a good idea. It is not. The second reason is that vocals are often recorded too loud. Again, there is a myth that the louder you shout into the microphone, the better. This is categorically false, especially in the era of 24-bit recording.

Sometimes a rough vocal cleanup is all you can do, because your options are limited. If the vocals were recorded too loud, that is, with clipping, de-clipping tools such as iZotope RX De-clip are ideal. In addition, the distortion is usually concentrated around particular frequencies, so even an equalizer can help.

For vocals recorded in a highly reverberant room, try a light gate; careful use of EQ can also suppress the sound of the room, as can dedicated tools like SPL De-Verb. Another approach is to mix the track so that the natural reverb on the vocal sounds appropriate and intentional.

The problem with vocals recorded in a closet or in the corner of a room is comb filtering. There is one trick that softens it: if the vocal part was also recorded as a double, raise or lower the pitch of the double slightly. This shifts the frequencies that were notched out, so when the double is mixed with the lead vocal those frequencies are no longer missing from the combined spectrum. The comb filtering is still present in the vocal, but it becomes much less obvious to the ear.
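To see why this trick works, comb filtering from a single reflection can be modeled as the signal summed with a delayed copy of itself; the sketch below (Python, with an assumed 1 ms reflection delay purely for illustration) shows where the notches fall and how a small pitch shift moves a double's content off the lead's notches:

```python
import math

def comb_magnitude(freq_hz, delay_s):
    """Magnitude of (signal + delayed copy): |1 + e^(-j*2*pi*f*tau)|.
    0 = the frequency is fully cancelled; 2 = fully reinforced."""
    phase = 2 * math.pi * freq_hz * delay_s
    return math.sqrt((1 + math.cos(phase)) ** 2 + math.sin(phase) ** 2)

tau = 0.001  # 1 ms reflection (e.g. a nearby wall) -- illustrative assumption
# Notches sit at odd multiples of 1/(2*tau): 500 Hz, 1500 Hz, 2500 Hz, ...
print(comb_magnitude(500, tau))    # ~0: cancelled (a notch)
print(comb_magnitude(1000, tau))   # ~2: reinforced
print(comb_magnitude(4500, tau))   # ~0: another notch, higher up

# Pitch-shifting the double by ~2% moves its notches relative to the lead's,
# so the frequencies missing from the lead are present in the double:
print(comb_magnitude(4500 * 1.02, tau))  # clearly non-zero: the null is filled in
```

The higher up the spectrum a notch sits, the further the same small pitch shift moves it, which is why the trick is most effective on the upper notches while the comb itself remains physically unchanged.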

Treatment

Now you have clean vocals (or perhaps they came to you already clean). It's time to decide what to do with them. I can't tell you exactly how you should or shouldn't process vocals, but I can give you insight into some things to focus on.

Balance

It is extremely important to determine the relationship between vocals and other instruments in the same frequency range. Hip-hop is characterized by the relationship between vocals and drums, and first of all, the snare drum. If you can figure out how to fit both the vocals and the snare into the mix without them interfering with each other, the rest of the instruments will fall into place much faster and easier.

On "1nce Again" you'll notice that the snare is a little louder than the vocals, and is concentrated in the brighter part of the frequency spectrum, while the vocals are a little lower in level and concentrated more in the midrange. This was a conscious decision made by the engineer during the mix. But on “Loungin’,” the vocals are heard at the same volume as the snare. And in “Massive Attack” the vocals sound higher, although there is not a snare drum, but a percussion instrument playing on beats 2 and 4, and it sounds mainly in the low mids.

"Air"

When mixing rap vocals, almost no reverbs are used. There are three main reasons for this:

  1. Rap vocals are much more dynamic and have a much more important rhythmic component than sung vocals, and reverberation can blur rhythm and articulation.
  2. The idea of hip-hop is for the voice to dominate the backing track and be "in your face", while reverb tends to sink the vocal back into the stereo mix.
  3. All the other engineers mix rap without reverb. Not a very serious reason, but it still exists.

However, vocals could use a little more space, or “air.” The idea is that the space around the voice makes it more alive and vibrant. And for this you can use a very short, wide, quiet reverb. It's also a good idea to add a delay (echo) to the vocals, but to make the delay sound in the background, it's worth cutting off most of the high-frequency range. This will create a sense of deep three-dimensional space that will contrast with the main vocal, bringing the vocal to the forefront even more. Finally, if you're lucky enough to have the room sound recorded separately along with the vocals, adding a natural reverb track can give you very dry vocals with a sense of "air" around them. Compression with a very fast attack and relatively fast release time, and boosting the treble can also add air to the voice.
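The darkened delay described above can be sketched as follows (Python; the delay length, wet level and damping amount are illustrative values, not a recipe): the echo is delayed, attenuated, and passed through a one-pole low-pass so it sits dull behind the dry vocal.

```python
def darkened_delay(samples, delay_samples, wet_gain, lowpass_coeff):
    """Mix in a delayed copy whose highs are tamed by a one-pole low-pass.
    lowpass_coeff in (0, 1): closer to 1 = darker, more smeared echo."""
    out = []
    lp_state = 0.0
    for i, x in enumerate(samples):
        delayed = samples[i - delay_samples] if i >= delay_samples else 0.0
        # one-pole low-pass applied to the echo only:
        lp_state = (1 - lowpass_coeff) * delayed + lowpass_coeff * lp_state
        out.append(x + wet_gain * lp_state)
    return out

# Feed an impulse: the echo arrives later, quieter, and with a soft decaying
# tail (the low-pass both darkens and slightly smears it behind the dry sound).
print(darkened_delay([1.0] + [0.0] * 9, 4, 0.5, 0.5))
```

In a real session the delay time would be tuned to the track's tempo and the damping done with a proper shelving or low-pass EQ on the delay return, but the principle is the same: the echo keeps the space while its dulled highs keep the dry vocal in front.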

Compression

A little compression has never hurt any vocal; it simply helps it sit better in the mix. But the main mistake people make when mixing hip-hop vocals is over-compression. Heavy compression only makes sense in mixes where many instruments are competing for space. When you read about rappers whose vocals pass through four compressors, it is most likely because the backing track is very dense and heavy compression is simply necessary for the vocal to cut through such a mix. Or it is a stylistic choice by the engineer to make the vocals "crunchy."

Filtration

It is critical to decide which frequencies to filter out of the mix to help the vocals cut through. For example, most engineers apply a high-pass filter to every track except the kick drum and the bass itself, which clears space for the low-frequency instruments. The importance of low-pass filtering, however, is often overlooked. Synthesizers, even when playing a bassline, can carry a lot of high-frequency content that the mix does not really need and that crowds the space around the vocal. So it rarely hurts to apply some low-pass filtering to the instruments competing with your rap vocals.

Also, returning to high-pass filtering: unless you are going for something heavy in the Bob Power style, you do not need a hard high-pass at around 120 Hz. The human voice, male and female, resonates down to 80 Hz (and sometimes lower). Try a gentle high-pass at 70–80 Hz on the vocal. Or maybe you don't need a high-pass filter on the vocal at all...

Presence effect

Determining the frequency range for the vocal is extremely important. Vocals that live only in the midrange, like a telephone receiver, sometimes sound great in a mix. "Warm" vocals focused on the low mids also have their place. Usually, though, to achieve a natural-sounding vocal with a sense of presence, you tame the "throaty" tones in the range from roughly 250 to 600 Hz (but do not mix by the numbers; listen, listen, and listen again). This chiefly emphasizes the chest sound. The sounds produced at the front of the mouth, by the tongue and teeth, sit somewhere around 2–5 kHz.

Paulina Steel,

sound designer

So, after the vocal has been equalized, we can begin processing it dynamically. Since these articles are intended for a general reader, I would like to start this chapter with a brief look at the phenomenon of dynamics itself. Interestingly, when we asked people just starting their musical journey what dynamics in music means to them, each answered differently, often subjectively: speed, tension, movement, drama, and so on. In fact, dynamics is not directly related to any of these. Dynamics concerns loudness, namely how it varies over the course of a composition. In academic music, dynamics are denoted f and p (forte and piano) and their gradations.

As for the dynamic processing of vocals, here we deal with much smaller differences than those marked in classical scores, which is why this processing demands exceptional scrupulousness. As noted above, we are talking about loudness, so it is significant that the main parameters of dynamics processors are expressed in decibels (dB). Which devices can be used for the dynamic processing of the vocal track? The most common are the gate, the de-esser, and the compressor.

Let's look at what each of them can give us on vocals.

Gate. A gate passes or mutes the signal depending on the input level. It has a threshold, set by the engineer and expressed in dB: anything quieter than this value is not let through the gate; in other words, it is cut off.

On vocals the gate is not used very often, mostly in restoration-type situations. Its action is quite crude and very noticeable on an instrument as delicate as the voice, so we recommend inserting a gate in the chain only in extreme cases and with extreme caution.

When can it be useful? When the vocal was recorded in less-than-professional conditions and background and extraneous noise is clearly audible between words, or when the vocalist has habits of smacking, clicking and breathing loudly between phrases. You can set the gate threshold so that every sound quieter than the main signal is rejected. In our case the main signal is the vocal, the loudest element; everything else is much quieter, although, of course, this depends on the performer. Note that the gate may cut too roughly; in that case, try increasing the release (the signal recovery time). You should also listen to the gated vocal from start to finish without the music. Bear in mind that people naturally begin and end words more quietly than they pronounce their middles, and the same goes for the beginnings and ends of phrases, which means a gate with a carelessly set threshold can chop all the word endings off the vocalist. With loud breathing, a gate may also leave unpleasant, audibly clipped remnants.
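The threshold-and-release behaviour described above can be sketched like this (Python; a deliberately naive gate with instant attack, and all signal values purely illustrative):

```python
def gate(samples, threshold, release_coeff):
    """Naive gate sketch: audio passes while a peak envelope sits above the
    threshold. release_coeff in (0, 1) controls how slowly the envelope
    falls after a loud passage, so quiet word endings are not chopped off."""
    env = 0.0
    out = []
    for x in samples:
        env = max(abs(x), env * release_coeff)  # instant attack, slow release
        out.append(x if env >= threshold else 0.0)
    return out

# A loud "word" followed by its quiet ending, then background noise:
signal = [0.8, -0.7, 0.6, 0.05, -0.04, 0.03, 0.02]
print(gate(signal, 0.1, 0.5))
# The quiet samples right after the word still pass (the release holds the
# gate open), while the later low-level noise is cut.
```

Lowering release_coeff closes the gate faster and starts eating word endings, which is exactly the failure mode described above; in practice you tune the release by listening to the gated vocal solo, start to finish.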

De-esser. As the name implies, this device exists to tame excessive hissing and whistling sounds. Essentially it is a band-limited compressor operating upward of roughly 2 kHz. Every person has his own diction and peculiarities of the vocal apparatus, which are most noticeable in the pronunciation of sibilants. In addition, many sensitive microphones are particularly bright in that same region. Taken together, all this can produce unpleasant results on the sounds "s", "sh", "shch" and "ch". Sometimes these artifacts are barely audible at first, but after an equalizer and a compressor they become very obvious.

Most digital de-essers have presets such as Male Vocal or Female Vocal. In most cases they still need adjustment of the operating frequency, the threshold and the bandwidth. Remember, too, that every person's diction is unique, so de-esser settings are individual in every case.

Often a de-esser is not used at all and sibilants are left unsmoothed, especially in modern Western music. This approach also has a right to exist and depends on the taste and ears of the engineer.

A de-esser can also be used to soften the tone of the voice in the 2–4 kHz range. In people with a harsh timbre, or in less experienced vocalists, loud singing and speech often produce a clear rise in this range, around the so-called second formant. If the engineer smooths it with an equalizer, the whole signal is affected, even where it is not needed, whereas a de-esser with a correctly chosen threshold softens exactly those parts of the vocal that require it.
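The de-esser's core idea, compressing only when the sibilant band gets loud, can be sketched as follows (Python; the sidechain frequency, threshold and reduction amount here are hypothetical, and a real de-esser uses a proper bandpass detector and smooth gain reduction rather than this hard per-sample switch):

```python
import math

def deess(samples, sample_rate, threshold, reduction):
    """Sketch: detect high-frequency energy with a first-order high-pass
    sidechain and duck the signal only where that energy exceeds the
    threshold (illustrative parameter values, not a production design)."""
    rc = 1.0 / (2 * math.pi * 5000.0)   # sidechain tuned around sibilance
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    prev_x = prev_y = 0.0
    out = []
    for x in samples:
        hp = a * (prev_y + x - prev_x)   # high-passed sidechain signal
        prev_x, prev_y = x, hp
        out.append(x * reduction if abs(hp) > threshold else x)
    return out

sr = 48000
vowel = [0.5 * math.sin(2 * math.pi * 200 * n / sr) for n in range(4800)]   # chest tone
ess   = [0.5 * math.sin(2 * math.pi * 8000 * n / sr) for n in range(4800)]  # "ess" hiss

# The vowel carries no sibilant-band energy and passes unchanged,
# while the "ess" is ducked because its energy trips the sidechain.
print(deess(vowel, sr, 0.1, 0.3) == vowel)
```

The key property, matching the text above, is that the gain reduction follows the sidechain level, so only the moments that actually contain sibilant energy are touched; the rest of the vocal is bit-for-bit untouched.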

Inept handling of this device can make the vocalist sound lisping or too dull. So choose the depth of the threshold carefully and be sparing with the amount of gain reduction.

In the signal chain, the gate and the de-esser are usually inserted before the equalizer and the compressor, respectively.

Compressor. It is usually inserted after the equalizer, although there are many points of view on this, and the debate can hardly be considered settled.

The compressor, as a device for compressing dynamic range, in effect averages the loudness of the vocal, bringing it to roughly one level. That level is set by the compressor's threshold, which the engineer must choose correctly. If the threshold is too high, the vocal will not be compressed at all; if it is too low, the vocal will be over-compressed, which leads to a squashed, overdriven sound.

If different parts of a composition are sung or rapped at substantially different levels, and this is part of the song's overall musical concept, then the quieter and louder parts should be split onto separate tracks and each processed with its own compressor, with its own threshold, and perhaps with entirely different settings.

There are several most common approaches to vocal compression:

- “natural” compression;

- strong and noticeable compression.

With natural compression the vocal part retains its character. The compressor's work is almost unnoticeable and serves its direct purpose: smoothing out peaks. If the goal is to leave the vocalist as "alive" and airy as possible, use a very fast attack, a very fast release and a fairly high threshold. If the threshold is set too low, the voice becomes more squashed and a nasal quality appears.

Strong, noticeable compression serves to change the character of the part. For example, if the vocal or speech is sluggish and unconvincing, a medium attack, a fairly low threshold and a higher ratio can make it more accented and expressive. But if the attack and release are set too long and the threshold too low, you can accidentally shift the stresses in words.
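The interplay of threshold, ratio, attack and release can be sketched as follows (Python; a simplified feed-forward design, and every parameter value below is illustrative rather than a recipe):

```python
import math

def compress(samples, threshold_db, ratio, attack_coeff, release_coeff):
    """Sketch of a feed-forward compressor: a smoothed level detector drives
    gain reduction above the threshold. Coefficients lie in (0, 1); smaller
    means faster attack/release."""
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        coeff = attack_coeff if level > env else release_coeff
        env = coeff * env + (1 - coeff) * level       # attack/release smoothing
        level_db = 20 * math.log10(env) if env > 1e-9 else -180.0
        if level_db > threshold_db:
            # above threshold the output rises only 1/ratio dB per input dB
            gain_db = (threshold_db - level_db) * (1 - 1 / ratio)
        else:
            gain_db = 0.0
        out.append(x * 10 ** (gain_db / 20))
    return out

# A steady 0 dB signal against a -20 dB threshold at 4:1 ends up held
# 15 dB down once the detector settles; a quiet signal passes untouched.
loud = compress([1.0] * 500, -20.0, 4.0, 0.9, 0.99)
print(round(loud[-1], 3))  # ≈ 0.178, i.e. about -15 dB
```

Note how the attack/release coefficients only shape the detector: a slow attack lets transients through before the gain clamps down (accentuation), while a fast attack and release track the waveform closely, which is the "natural", peak-smoothing behaviour described earlier.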

It also happens that two compressors are used on one vocal track: the first to highlight accents, the second for overall leveling of the part, or vice versa (serial compression).

Dynamics processors offer the sound engineer a wide field for creativity. Their action, especially the compressor's, is very subtle and may be inaudible to an inexperienced listener. So the main thing in compression, as in every other kind of processing, is the sound engineer's first commandment: do no harm.

TO BE CONTINUED....