The first step in most sound recordings is usually a volume boost; hence this first part of “Mixing Essentials” deals with exactly that. Instead of simply ‘turning up’ the whole signal, compressors are used – sometimes already placed between the instrument and the recording device. In effect, a compressor raises the volume of the weaker parts of the sound: it attenuates the peaks above a threshold, and makeup gain then brings the whole signal back up. This matters because what the human ear can pick up well in a real room tends to get drowned out along the electrified signal path – and even more so in the interplay with other audio tracks.
KEYS author Holger Steinbrink explains a few basics here:
Philipp Ernst from abmischenlernen.de can also be consulted on the topic:
Rick Beato on more specific aspects:
As mentioned there in passing, compression ranges from a gentle flattening of the dynamics to a hard upper limit on the amplitude – for the latter you use a limiter. MixbusTV explains the difference:
Joe Gilder summarizes the basic functions and recommendations for using compressors from yet another angle: the attack and release settings, applying compression to single tracks versus several tracks on a mix bus, the order of equalizer and compressor, and the question of volume versus sound quality.
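To make the knobs mentioned in these videos – threshold, ratio, attack, release, makeup gain – a little more concrete, here is a minimal Python sketch of a feed-forward compressor. It is an illustration under my own assumptions (sample-by-sample processing, no look-ahead, parameter names and defaults chosen by me), not the algorithm of any particular plugin. A limiter is essentially the same curve with a very high ratio, so that almost none of the overshoot survives.

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0,
             makeup_db=6.0, sample_rate=44100):
    """Feed-forward compressor sketch: attenuate the part of the
    signal that exceeds the threshold by `ratio`, with the gain
    change smoothed by the attack/release time constants."""
    # Per-sample smoothing coefficients from the time constants
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))

    eps = 1e-10  # avoid log10(0) at zero crossings
    level_db = 20.0 * np.log10(np.abs(signal) + eps)

    out = np.empty_like(signal)
    gain_db = 0.0  # smoothed gain reduction in dB (always <= 0)
    for i, lvl in enumerate(level_db):
        over = lvl - threshold_db
        # Static curve: above the threshold, keep only 1/ratio
        # of the overshoot (ratio -> infinity acts as a limiter)
        target = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        # More reduction needed -> attack speed, less -> release speed
        coeff = attack if target < gain_db else release
        gain_db = coeff * gain_db + (1.0 - coeff) * target
        out[i] = signal[i] * 10.0 ** ((gain_db + makeup_db) / 20.0)
    return out
```

A constant full-scale signal with this curve (threshold −20 dB, ratio 4) overshoots by 20 dB and is pulled down by 15 dB; the makeup gain then lifts the now-flattened signal – quiet parts included – back up, which is exactly the “raising the weaker parts” described above.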
Music is made – and music coming from loudspeakers is made in a different way than what someone plays on an instrument in a real room.
Accompanying everything else on the blog, I’d like to share with you a few glimpses into the engine room of music production – as a series of articles called “Mixing Essentials” that will continue here over the next few weeks. The focus will be on what’s new: even the long-established means of analog recording technology have by now almost completely migrated into the digital domain (with a few points of contention among experts).
GRÜBELBACH is, so far, a studio project. Behind acoustic guitar, human voice, electric guitar, synthesizer and a few ‘real’ microphones, the digital world begins with a simple interface – a small box with jacks and a USB cable to the computer.
Almost everywhere in the recording world, we are now talking about software solutions. Even a classical ensemble can no longer do without digital editing tools, starting with the very first editing steps.
On the computer screen, this looks roughly like this:
I currently work with Cubase 10; industry standards are Logic Pro (for Apple users), Ableton Live and Avid Pro Tools. Newer products of this kind are Studio One and Reaper (here at bonedo, an overview of such programs). These are “digital audio workstations” (DAWs). On a personal computer, they replace the formerly space-consuming apparatus of amplifiers, effects units, tape machines and mixing desks. What remains are the tracks arranged one above the other with their clips (the equivalent of snippets of tape), plus all kinds of buttons to their left for accessing the functions mentioned above. Each audio track also has a so-called “automation” fold-out in which, besides volume, various effect parameters can be controlled – mostly varying along the timeline. The effects are also referred to interchangeably as “plugins”.
What used to be called sound mixing is nowadays usually just “mixing”. The same means are then applied once more – usually in smaller number and at lower strength – in a final working step, the “mastering”. In the professional music business, these processes are typically distributed among several heads; depending on the effort involved, there are of course specialized sound and recording engineers there who still deal with plenty of physical equipment. In terms of working time, there is also a serious difference between, say, recording a drum kit with several microphones and programming the same part digitally. And the more you do digitally, the fewer unwanted interference effects you have to expect.
Stylistically, of course, there’s a big difference, though: the more digitally you work, the more technical recorded sounds can come across. At several points – during recording, in the transfer to the DAW, and in the post-processing of individual tracks or the whole project during mixing and mastering – a wide variety of hardware can also still be used, for which plugins are now available as substitutes. Some mixing engineers hear differences and swear by their traditional analog standards.
One more pair of terms is worth mentioning: DAWs know two types of tracks – waveform audio tracks and MIDI tracks. The former should be familiar to anyone who has ever seen the visualization of a sound wave. MIDI stands for “Musical Instrument Digital Interface” and has been in use since 1982. These are signals transmitted mainly by keyboards and synthesizers – technically unambiguous indications of which note is played for how long, and with what velocity. Given the almost infinite sound worlds of synthesizers, all kinds of instruments can be emulated this way, from a piano to the individual parts of a drum kit.
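These “technically unambiguous indications” are literally just a few bytes on the wire. The byte layout below follows the MIDI 1.0 specification (status byte, note number, velocity); the helper functions themselves are my own illustration:

```python
def note_on(note, velocity, channel=0):
    """Three raw bytes of a MIDI 'note on' message:
    status byte (0x90 | channel), note number (0-127), velocity (0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(note, channel=0):
    """'Note off' message (status 0x80 | channel); release velocity 0."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

# Middle C (note number 60), struck fairly hard:
msg = note_on(60, 100)
```

The duration of a note is not stored in the message itself – it is simply the time between the “note on” and the matching “note off”, which is why a DAW can stretch or shorten MIDI notes so freely afterwards.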
In the DAW, the individual sound events are then displayed in the clip as narrow bars, with the velocity – roughly, the volume – of each note annotated below as a vertical stroke:
Since music nowadays – after the eras of vinyl records and CDs – mostly reaches its listeners via the internet, a few conditions of music production have changed on the creative and technical side as well. The key term is the “loudness war”. On the one hand, playback devices have partly regressed: any song should sound halfway acceptable even through simple notebook speakers or the mono speaker of a mobile phone. On the other hand, each platform uses its own algorithms to adjust the supplied audio files to a consistent loudness level. For music to sound rich and sufficiently loud in comparison under these conditions, the familiar means of mixing – which will be the subject of the following articles – have to be modified and intensified to some extent.
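The platforms’ volume adjustment boils down to a simple calculation: measure the loudness of a track, then turn it up or down by the difference to a target level. A strongly simplified sketch – real services measure loudness in LUFS according to ITU-R BS.1770, and the target value varies by platform; here plain RMS and an assumed −14 dB target stand in for both:

```python
import numpy as np

def normalization_gain_db(signal, target_db=-14.0):
    """Gain (in dB) a platform would apply to bring a track to a
    target level. Simplified: RMS stands in for a real LUFS
    measurement, and -14 dB is an assumed target, not any
    platform's documented value."""
    rms = np.sqrt(np.mean(signal ** 2))
    measured_db = 20.0 * np.log10(rms + 1e-12)
    return target_db - measured_db
```

The consequence described above follows directly: a master squashed to be maximally loud just gets a larger negative gain applied and ends up no louder than anyone else’s track on the platform.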
If you are a professional with time and money, you can push such efforts as far as you like and have fun doing it. In any case, we are witnessing a technical development that deserves the name “revolution”: through simpler handling, the migration of the whole toolchain into software, and thus a drastic reduction in cost, it puts these tools into the hands of everyone interested – first and foremost, of course, those who make music themselves.
The following articles in this series give an overview of the most important design tools used in current music production, mostly with the help of selected video tutorials from YouTube.
Excuse me? After nine months, a music video by two attractive ladies – pleasantly offbeat in concept, with retro appeal, fantastically over-made-up, and easy-listening music – has just 20,000 views? And that on the label Grönland Records?
The duo Children with “Hype”:
A catchy, velvety electric bass sound (here you can see it played on camera) points, in the current pop scene, quite clearly to Tame Impala and his Lolita-heavy 2015 video for “The Less I Know the Better” (95 million YouTube views):
Here are more detailed explanations of how that bass sound is created: