#Mixing Essentials (4) 🎛 | Reverb and delay

The three essential tool categories discussed so far (compressor, equalizer, saturation) shape, in simple terms, the character of waveforms: they change the volume of certain parts of the sound spectrum from lows to highs, and they alter intuitively perceived qualities that we describe with linguistic metaphors such as “thin” or “full”, “smooth” or “rough”.

What has been ignored so far is the fact that sound waves propagate through a space. Either they reach the human ear directly from their source, or they are first reflected off other surfaces and therefore arrive at the listening position with a time delay and only in part. In everyday language we call this reverberation. A special case is the echo, in which still-recognizable individual sounds repeat a certain number of times at a certain rate, depending on the distance to the reflecting surfaces.

In audio mixing, the more general term delay is used, and its types include reverb and echo.

Here are three tutorials from the Recording Blog, delamartv and Martin Wolfinger on mixing reverb:

Martin Wolfinger also separately discusses the function of reverb for adding depth to a mix:

Big Z gives another English-language overview of essential aspects of reverb:

If you look at the setting options of reverb and delay plug-ins, you will find a very wide range of style presets. Their effect is in turn highly variable in itself: the intensity with which the effect is applied (optionally along with several other parameters) produces quite different results, often all the way to irritating, disruptive distortion.

Of course, the stereo spectrum also plays a role in spatial effects, since it signals the spatial position of a sound source to the ear. Deliberate imbalances between the left and right delay can also be used, for example, for psychoacoustic loudness illusions.

In practice, the term delay covers two extremes. At one end is a limited use as a doubling of sound events at intervals of a few milliseconds (which adds another means of reinforcement besides level, compressor and EQ). At the other end are relatively long-lingering, usually quieter repeats that either subtly create texture in the background to make the overall picture fuller (without necessarily being clearly audible as such) or are deliberately used to create clearly audible series of sound events. In this way, rather plainly played sequences of notes can yield more complex results. Far beyond creating certain realistic spatial illusions (denoted by presets such as “room” or “hall”), delays can then produce extremely artificial soundscapes that were unprecedented before the corresponding effects units and plug-ins.
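
The basic mechanism behind both uses, the millisecond doubling and the audible echo, can be sketched in a few lines. This is a minimal illustration, not any particular plug-in's algorithm; the names `delay_samples`, `feedback` and `mix` are chosen here for clarity:

```python
def apply_delay(signal, delay_samples, feedback=0.5, mix=0.5, repeats=3):
    """Add decaying repeats of `signal`, each `delay_samples` later and
    quieter by the `feedback` factor: an echo when the delay is long,
    a thickening 'doubling' when it is only a few milliseconds."""
    out = [0.0] * (len(signal) + delay_samples * repeats)
    # dry signal first
    for i, s in enumerate(signal):
        out[i] += s
    # each repeat: shifted further along, attenuated more
    gain = mix
    for r in range(1, repeats + 1):
        offset = delay_samples * r
        for i, s in enumerate(signal):
            out[i + offset] += s * gain
        gain *= feedback
    return out
```

At a sample rate of 44.1 kHz, a `delay_samples` of a few hundred thickens the source almost imperceptibly; a value in the tens of thousands, with higher `feedback`, becomes a clearly audible echo chain.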


#Mixing Essentials (3) 🎛 | Saturation

In the explanatory steps of this brief overview of mixing and mastering, we have so far traversed the dimensions of volume and frequency spectrum. Both are easy to visualize on a two-dimensional surface, as height and width. With that, however, this visualization is exhausted.

Another basic tool, saturation, can only be grasped through listening experiences and their description. In general terms, it can perhaps be described as a manipulation of ‘density’ and ‘sound texture’, depending also on the specific effects used, which of course differ from one another. In a further metaphorical comparison, one can speak, among other things, of a ‘roughening’ of the sound, or of lending it ‘plasticity’. The number of plug-ins for this is, as with all the others, large by now. These effects originate in a sonic difference between purely digital, and thereby rather low-noise, recording techniques and the physical audio tapes of earlier times. Depending on one’s preferences, it is still common today to record tracks onto tape, or to run them through tape after the fact. Many of the plug-ins reference tape recorders in their names and/or design, or even emulate specific models of this hardware.
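
A common way to mimic tape's behavior in software is soft clipping: a curve that leaves quiet samples nearly untouched but rounds off loud peaks, which adds harmonics and ‘density’. The sketch below uses the tanh function for this, a textbook simplification rather than a model of any specific tape machine; `drive` is an illustrative parameter name:

```python
import math

def saturate(samples, drive=2.0):
    """Soft-clip each sample with tanh: small values pass almost
    linearly, peaks are rounded off toward +/-1, which adds harmonics.
    Dividing by tanh(drive) rescales so a full-scale input stays at 1.0."""
    return [math.tanh(drive * s) / math.tanh(drive) for s in samples]
```

Higher `drive` pushes more of the waveform into the curved region and makes the effect more audible, up to obvious distortion.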

Holger Steinbrink explains it here for KEYS in German:

Here are three more tutorials on the basics of saturation from Musician on a Mission, Sage Audio, and Warren Huart of Produce Like A Pro:


#Mixing Essentials (2) 🎛 | Equalizer (EQ)

While volume/amplitude is usually shown on a vertical scale, the common displays and diagrams show the sound spectrum from bass to treble on a horizontal scale. The most important instrument for intervening at this level in mixing and mastering is the equalizer, or EQ for short. Today, mostly continuously adjustable curves are used, at varying degrees of resolution (besides lows, mids and highs, also low mids, high mids, and so on).

Here are three tutorials to get you started – by Martin Wolfinger, delamartv and Jonas Wagner from the Recording Blog.

Just as levels, compressors and limiters do for volume, the EQ balances the interaction of multiple tracks on the scale of frequencies, so that not too many signals compete with each other in the same range.

This balancing also includes rather local and sometimes fairly drastic cuts of frequency ranges where they cause particularly noticeable disturbances. Such ranges create muddy sound qualities in the overall picture, which the EQ can prevent. For this purpose, individual tracks or the entire mix are swept for disturbing frequency ranges with the EQ: a narrow band is boosted sharply as a test, and the offending spot, once found, is then lowered in amplitude at that point on the horizontal axis.
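
The shape of such a boost or cut is a bell around a center frequency. The function below sketches that curve on a log-frequency axis as a plain Gaussian; it is only an illustration of the curve's geometry, not the biquad filter math a real EQ uses, and `q` here simply narrows the bell:

```python
import math

def bell_gain_db(freq_hz, center_hz, gain_db, q=2.0):
    """Gain in dB of a simplified bell (peaking) EQ band: the full
    gain_db at center_hz, falling off with the distance in octaves;
    a larger q means a narrower band. A negative gain_db gives the
    'cut' used to tame a disturbing frequency range."""
    octaves = math.log2(freq_hz / center_hz)
    return gain_db * math.exp(-(octaves * q) ** 2)
```

The sweep described above corresponds to moving `center_hz` with a strong positive `gain_db` until the disturbance jumps out, then switching `gain_db` to a negative value at the frequency found.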

Philipp Ernst from abmischenlernen explains it:

To recognize this better in the interplay of the tracks, advanced plug-ins now additionally help by comparing tracks simultaneously and marking the competing spots in the frequency band with a corresponding visualization. In English, this problem is called “masking”, as explained here by Plugin Boutique using the example of iZotope’s plug-in “Neutron 3”:


#Mixing Essentials (1) 🎛 | Compressor

A first step in most sound recordings is usually a volume boost; hence this first part of the “Mixing Essentials” on that topic. Rather than simply ‘turning up’ the overall volume, compressors are used, sometimes already between the instrument and the recording device. The compressor reduces the loudest parts of the signal so that, after make-up gain, its weaker parts come forward. This is because what the human ear can sometimes hear well in a real room tends to be drowned out along the electrified signal path, and even more so in the interaction with other audio tracks.
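
The core of this can be written as one small function in the decibel domain. This is a static sketch, assuming the level of a signal segment has already been measured in dB; real compressors additionally smooth the gain over time with attack and release, and the parameter names here are illustrative:

```python
def compress_db(level_db, threshold_db=-18.0, ratio=4.0, makeup_db=6.0):
    """Static downward compression: levels above the threshold are
    scaled down by the ratio, then everything is raised by the
    make-up gain, so the quiet parts end up relatively louder."""
    if level_db > threshold_db:
        level_db = threshold_db + (level_db - threshold_db) / ratio
    return level_db + makeup_db
```

With these example settings, a loud peak at -6 dB comes out at -9 dB while a quiet passage at -30 dB rises to -24 dB: the distance between them has shrunk from 24 dB to 15 dB.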

KEYS author Holger Steinbrink explains a few basics here:

Philipp Ernst from abmischenlernen.de can also be consulted on the topic:

Rick Beato on more specific aspects:

As mentioned here in passing, compression ranges from strong flattening to a hard upper cap on the amplitude, for which a limiter is used. At MixbusTV, the difference is explained:

Joe Gilder once again summarizes basic functions and recommendations for using compressors in his own way. He covers the attack and release parameters, application to single tracks versus several in a mix bus, the order of equalizer and compressor, and the question of volume vs. sound quality.


#Mixing Essentials (0) 🎛 | Prologue

Music is made. Music from loudspeakers is made in yet a different way than what someone plays on an instrument in a real room.

Accompanying everything else on the blog, I’d like to share a few glimpses into the engine room of music production, as a series of articles called “Mixing Essentials” that will continue here over the next few weeks. The focus will be on what is new: even the long-established means of analog recording technology have almost completely transitioned to digital (with a few points of contention among experts).

GRÜBELBACH is so far a studio project. Behind acoustic guitar, human voice, electric guitar, synthesizer and a few ‘real’ microphones, the digital world begins with a simple interface, a small case with jacks and USB cable to the computer.

We are now talking about digital program solutions almost everywhere in the recording world. Even the classical ensemble can no longer do without digital editing tools, starting with the first editing steps.

In principle, this looks like this on the computer screen:

[Screenshot: Cubase project window]

I currently work with Cubase 10; industry standards include Logic Pro (for Apple users), Ableton Live and Avid Pro Tools, with newer products of this kind being Studio One and Reaper (see bonedo for an overview of such programs). These are “digital audio workstations” (DAWs). On a personal computer, they replace the formerly space-consuming apparatus of amplifiers, effects units, tape recorders and mixing desks. What remains are the tracks arranged one above the other with their clips (corresponding to snippets of tape), as well as all kinds of buttons to their left, which provide access to the aforementioned functions. Each audio track also has a fold-out “automation” lane, in which, in addition to volume, various effect parameters can be controlled, usually varying along the timeline. Effects are equivalently referred to as “plug-ins”.

Even the German-language term for sound mixing has nowadays largely given way to the English “mixing”. The same means are then applied, usually in smaller number and at lower strength, in a final work step, the “mastering”. In the professional music business, these processes are usually distributed among several heads. Depending on the effort involved, there are of course specialized sound and recording engineers who still work a great deal with physical equipment. In terms of working time, there is also a serious difference between, say, recording a drum kit with several microphones and producing one digitally. The more you do digitally, the fewer unwanted interference effects you can expect.

Stylistically, of course, there is a big difference, since the more completely digital the chain, the more technical recorded sounds can end up sounding. At several points – during recording, in the transfer to the DAW, and in the post-processing of individual tracks or the entire project in mixing and mastering – a wide variety of hardware can still be used, for which plug-ins are now available as substitutes. Some mixing engineers hear differences and swear by their traditional analog standards.

One more pair of terms is worth mentioning: in DAWs, there are two types of tracks, waveform audio tracks and MIDI tracks. The former should be familiar to anyone who has ever seen the visualization of a sound wave. MIDI stands for “Musical Instrument Digital Interface” and has been in use since 1982. These are signals transmitted mainly by keyboards or synthesizers – technically completely unambiguous indications of which note is played, for how long, and sometimes with which velocity. Thanks to the almost infinite sound worlds of synthesizers, all kinds of instruments can be emulated, from the piano to the individual parts of a drum kit.
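
What such an ‘unambiguous indication’ looks like at the wire level can be shown concretely, since the MIDI 1.0 byte layout is standardized: a status byte naming the message type and channel, followed by two data bytes. The helper names below are our own, but the byte values follow the specification:

```python
def note_on(channel, note, velocity):
    """Three raw bytes of a MIDI Note On message: status 0x90 plus the
    channel (0-15), then the note number (0-127, 60 = middle C) and the
    velocity, i.e. how hard the key was struck (1-127)."""
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Note Off: status 0x80 plus channel, note number, release velocity 0."""
    return bytes([0x80 | channel, note, 0])
```

Playing middle C on channel 1 is just `note_on(0, 60, 64)`; ‘for how long’ the note sounds is encoded by nothing more than the time between this message and the matching `note_off(0, 60)`.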

The individual sound events are then displayed in the DAW clip as narrow bars, with their loudness (velocity) annotated below as vertical bars:

[Screenshot: Cubase MIDI track]
Cubase MIDI track

Since, after the eras of vinyl records and CDs, music nowadays reaches listeners mostly via the internet, a few conditions of music production have also changed on the creative and sound-engineering side. The most important term here is the “loudness war”. On the one hand, there have been partial regressions in playback devices, so that any song should sound halfway acceptable through simple notebook speakers or even the mono speaker of a mobile phone. On the other hand, each platform uses its own algorithms to adjust the submitted sound files to a consistent loudness level. For music to sound rich and sufficiently loud in comparison under these conditions, the familiar means of mixing, which will be the subject of the following articles, must be modified and intensified to some extent.
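
What ‘adjusting to a consistent loudness level’ means can be sketched with a toy calculation. Streaming services actually measure loudness in LUFS per the ITU-R BS.1770 recommendation; the plain RMS used below is a deliberate simplification, and the target of -14 dB is only an example value:

```python
import math

def gain_to_target(samples, target_db=-14.0):
    """Linear gain factor that would bring this signal's RMS level to
    target_db (relative to full scale). A quiet master gets a gain > 1,
    an over-loud one a gain < 1 - which is why sheer loudness brings no
    advantage on normalizing platforms."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    current_db = 20.0 * math.log10(rms)
    return 10.0 ** ((target_db - current_db) / 20.0)
```

Applying the returned factor to every sample lands the file at the platform's target level, regardless of how loud it was delivered.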

If you are a professional with time and money, you can pursue such efforts as far as you like and have fun with them. In any case, we are seeing a technical development that deserves the name “revolution”: through simplified handling, through implementation in computer software, and hence through drastic cost reductions, it makes these tools available to everyone interested – first and foremost, of course, those who make music themselves.

The following articles in this series give an overview of the most important design tools used in current music production, mostly with the help of a selection of the many video tutorials on YouTube.
