Audio mixing (recorded music)
[Image: Sony DMX R-100 digital mixing console, as used in project studios]
In sound recording and reproduction, audio mixing is the process of combining multitrack recordings into a final mono, stereo or surround sound product. In the process of combining the separate tracks, their relative levels (i.e. volumes) are adjusted and balanced, and various processes such as equalization and compression are commonly applied to individual tracks, groups of tracks, and the overall mix. In stereo and surround sound mixing, the placement of each track within the stereo (or surround) field is also adjusted and balanced. Audio mixing techniques and approaches vary widely and have a significant influence on the final product.
In the late 19th century, Thomas Edison and Emile Berliner developed the first recording machines. The recording and reproduction process itself was completely mechanical, with little or no electrical machinery involved. Edison's phonograph cylinder system used a small horn terminated in a stretched, flexible diaphragm attached to a stylus, which cut a groove of varying depth into the malleable tin foil of the cylinder. Emile Berliner's gramophone system recorded music by inscribing spiraling lateral cuts onto a flat disc.
Electronic recording became more widely used during the 1920s. It was based on the principles of electromagnetic transduction. The possibility for a microphone to be connected remotely to a recording machine meant that microphones could be positioned in more suitable places. The process was improved when outputs of the microphones could be mixed before being fed to the disc cutter, allowing greater flexibility in the balance.
Before the introduction of multitrack recording, all sounds and effects that were to be part of a record were mixed at one time during a live performance. If the recorded mix wasn't satisfactory, or if one musician made a mistake, the selection had to be performed again until the desired balance and performance were obtained. With the introduction of multitrack recording, the production of a modern recording changed into one that generally involves three stages: recording, overdubbing, and mixing.
Modern mixing emerged with the introduction of commercial multi-track tape machines, most notably when 8-track recorders were introduced during the 1960s. The ability to record sounds into separate channels meant that combining and treating these sounds could be postponed to the mixing stage.
A mixer (mixing console, mixing desk, mixing board, or software mixer) is the operational heart of the mixing process. Mixers offer a multitude of inputs, each fed by a track from a multitrack recorder. Mixers typically have 2 main outputs (in the case of two-channel stereo mixing) or 8 (in the case of surround). Mixers offer three main functions:
Summing signals together, which is normally done by a dedicated summing amplifier or, in the case of a digital mixer, by a simple algorithm.
Routing of source signals to internal buses or external processing units and effects.
On-board processors with equalizers and compressors.
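The summing function described above can be sketched in a few lines. This is a minimal illustration, not any real console's implementation: `mix_tracks` and its list of fader gains are hypothetical names chosen for the example.

```python
import numpy as np

def mix_tracks(tracks, gains):
    # A digital mixer's summing stage, simplified: scale each input
    # track by its fader gain and add the results sample by sample.
    # Real mixers also handle panning, inserts, and bus routing.
    out = np.zeros_like(np.asarray(tracks[0], dtype=float))
    for track, gain in zip(tracks, gains):
        out = out + gain * np.asarray(track, dtype=float)
    return out
```

Summing in floating point, as most DAWs do internally, avoids the overflow concerns of fixed-point addition at the cost of a final conversion stage.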
Mixing consoles can be large and intimidating due to the exceptional number of controls. However, because many of these controls are duplicated (e.g. per input channel), much of the console can be learned by studying one small part of it. The controls on a mixing console typically fall into one of two categories: processing and configuration. Processing controls are used to manipulate the sound; these can vary in complexity from simple level controls to sophisticated outboard reverberation units. Configuration controls deal with the signal routing from the input to the output of the console through the various processes.
Digital audio workstations (DAW) can perform many mixing features in addition to other processing. An audio control surface gives a DAW the same user interface as a mixing console. The distinction between a large console and a DAW equipped with a control surface is that a digital console will typically consist of dedicated digital signal processors for each channel. DAWs can dynamically assign resources such as digital signal processing power, but may run out if too many signal processes are in simultaneous use. This overload can often be solved by increasing the processing capacity of the DAW, for example with additional DSP hardware or by rendering tracks so their effects no longer run in real time.
Outboard gear and plugins
Outboard gear (analog) and audio plug-ins (digital) can be inserted into the signal path to extend processing possibilities. Outboard gear and plugins fall into two main categories:
Processors - these devices are normally connected in series to the signal path, so the input signal is replaced with the processed signal. Examples include equalization, dynamic processing (compressors, gates, expanders, and limiters). However, some processors are also used in parallel, as is the case in techniques such as parallel compression/limiting (a.k.a. New York compression) and sidechain equalization.
Effects - these can be considered as any unit that has an effect upon the signal. The term is mostly used to describe units that are connected in parallel to the signal path, and therefore add to the existing sounds rather than replace them. Examples of common effects include reverb and delay. Some effects, such as chorus, flange, and vibrato, are more commonly used in series.
Multiple level controls in signal path
A single signal can pass through a large number of level controls, e.g. the individual channel fader, subgroup master fader, master fader and monitor volume control. According to audio engineer Tomlinson Holman, this multiplicity of controls creates problems: each gain stage has its own dynamic range, and the controls must be set correctly to avoid excessive noise or distortion.
Processes that affect levels
Faders - Used to attenuate or boost the level of signals. Level control is the most commonly used process, found even on the simplest of mixers.
Panning - A panning control, or pan pot, changes the proportional level of a signal that is sent to each speaker in stereo and surround sound systems. This gives the listener the impression that the signal is coming from a particular direction. This process is based on the phenomenon of sound localization, which is the listener's ability to perceive the direction of a sound by the differences in intensity and arrival time at the left and right ears.
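One common way to implement a pan pot is the constant-power pan law, a sketch of which follows; this is one typical choice (with a -3 dB dip at center), not the only law in use:

```python
import math

def pan_gains(pan):
    # Map a pan position in [-1.0, 1.0] (hard left .. hard right) to
    # an angle in [0, pi/2], then use cosine/sine so that the two
    # gains always satisfy L^2 + R^2 = 1 (constant perceived power).
    angle = (pan + 1.0) * math.pi / 4.0
    return math.cos(angle), math.sin(angle)
```

At center (`pan = 0.0`) both gains are about 0.707, i.e. -3 dB, so a source does not sound louder when panned to the middle.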
Compressors - A device which attenuates the volume of a track when its level passes beyond a set threshold. The primary use of a compressor in mixing is to limit the dynamic range of a signal. Compressors are equipped with a number of controls, including the threshold, the amount of compression (the ratio), and how quickly or slowly the compressor acts (attack and release).
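The threshold and ratio controls define the compressor's static gain curve, sketched below in decibels. This shows only the gain computation; a real compressor smooths the result over time with its attack and release controls.

```python
def compressor_gain_db(level_db, threshold_db, ratio):
    # Below threshold the signal passes unchanged (0 dB gain change).
    if level_db <= threshold_db:
        return 0.0
    # Above threshold, each dB of input produces only 1/ratio dB of
    # output, so the gain reduction grows with the overshoot.
    output_db = threshold_db + (level_db - threshold_db) / ratio
    return output_db - level_db
```

For example, with a -20 dB threshold and a 4:1 ratio, a -8 dB input (12 dB over threshold) comes out at -17 dB, a gain reduction of 9 dB.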
Noise gate or expansion - An expander does the opposite of a compressor: it increases the volume range of a source, and may do so across a wide dynamic range or may be restricted to a narrower region by its controls. Restricting expansion to only low-level sounds helps to minimize noise. This function, often referred to as downward expansion, noise gating, or keying, reduces the level of signals below a threshold set by a dedicated control.
Limiters - A limiter is a compressor with a ratio of 10:1 or higher. Some limiters have extremely high (or infinite) ratios; these are often referred to as "brick-wall" limiters, meaning that little to no audio is allowed to surpass the threshold. Limiters are most commonly used in mixing to strictly limit the maximum output level of a track, bus, or overall mix, and are especially useful in digital mixing to avoid clipping.
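In gain-curve terms, a brick-wall limiter is the infinite-ratio limit of the compressor curve: everything above the ceiling is pulled back down to the ceiling exactly. As before, this sketch omits the look-ahead and fast-release behavior that practical limiter designs rely on.

```python
def limiter_gain_db(level_db, ceiling_db):
    # Infinite-ratio ("brick-wall") case of a compressor: any level
    # above the ceiling is reduced to exactly the ceiling.
    return 0.0 if level_db <= ceiling_db else ceiling_db - level_db
```

With a -3 dB ceiling, a -1 dB input receives 2 dB of gain reduction, while anything at or below -3 dB passes untouched.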
Processes that affect frequency response
There are two principal frequency response processes:
Equalizers - The simplest description of EQ is the process of altering the frequency response in a manner similar to what tone controls do on a stereo system. Professional EQs divide the audio spectrum into three or four parts, which may be called the low-bass, mid-bass, mid-treble, and high-frequency controls.
Filters - Filters are used to eliminate certain frequencies from the output. Filters strip off part of the audio spectrum. There are various types of filters. A high-pass filter (low-cut) is used to remove excessive room noise at low frequencies. A low-pass filter (high-cut) is used to help isolate a low-frequency instrument playing in a studio along with others. And a band-pass filter is a combination of high- and low-pass filters, also known as a telephone filter (because a sound lacking in high and low frequencies resembles the quality of sound over a telephone).
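The simplest of these is a first-order (6 dB/octave) filter, sketched below; the high-pass falls out as the input minus the low-passed signal. This is a minimal illustration of the principle, not the steeper multi-pole designs found on consoles.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate):
    # Smoothing coefficient derived from the RC time constant of an
    # analog first-order low-pass filter (a "high-cut").
    dt = 1.0 / sample_rate
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def one_pole_highpass(samples, cutoff_hz, sample_rate):
    # A first-order high-pass ("low-cut") is the input minus the
    # low-passed signal.
    low = one_pole_lowpass(samples, cutoff_hz, sample_rate)
    return [x - l for x, l in zip(samples, low)]
```

A band-pass "telephone filter" would simply chain the two, high-passing around 300 Hz and low-passing around 3 kHz.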
Processes that affect time
Reverbs - Reverbs are used to simulate acoustic reflections in a real room, adding a sense of space and depth to otherwise "dry" recordings. Another use is to distinguish among auditory objects: all sound sharing one reverberant character will be grouped together by human hearing in a process called auditory streaming. This is an important technique in creating the illusion of sound layered from in front of the speaker to behind it. Before the advent of electronic reverb and echo processing, physical means were used to generate these effects. An echo chamber, a large reverberant room, could be equipped with a speaker and microphones: signals were sent to the speaker, and the reverberation generated in the room was picked up by the microphones.
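A delay effect, the simpler cousin of reverb, shows the parallel ("wet/dry") structure these time-based effects share. The sketch below is illustrative only; real reverbs combine many such delay lines with filtering.

```python
def feedback_delay(samples, delay_samples, feedback=0.5, mix=0.3):
    # A circular buffer holds the delayed signal; each repeat decays
    # by `feedback`, and `mix` sets how much of the "wet" signal is
    # added in parallel with the unaltered dry input.
    buf = [0.0] * delay_samples
    out = []
    for i, x in enumerate(samples):
        wet = buf[i % delay_samples]
        out.append(x + mix * wet)
        buf[i % delay_samples] = x + feedback * wet
    return out
```

Feeding an impulse through this produces a train of echoes spaced `delay_samples` apart, each quieter than the last.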
Processes that affect space
Panning - While panning is a process that affects levels, it also can be considered a process that affects space since it is used to give the impression of a source coming from a particular direction. Panning allows the engineer to place the sound within the stereo or surround field, giving the illusion of a sound's origin having a physical position.
Pseudostereo creates a stereo-like sound image from monophonic sources. In this way, the apparent source width or the degree of listener envelopment is increased. A number of pseudostereo recording and mixing techniques have been developed by audio engineers and researchers.
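One of the simplest such techniques, sketched below, delays one channel by a few milliseconds so the interchannel time difference widens the apparent source. This is just one illustrative approach; others use complementary filtering or artificial reverberation.

```python
def pseudostereo(mono, delay_samples=20):
    # Copy the mono source to the left channel, and a slightly
    # delayed copy (zero-padded at the start) to the right channel.
    # At 48 kHz, 20 samples is roughly 0.4 ms of delay.
    left = list(mono)
    right = [0.0] * delay_samples + list(mono)[:len(mono) - delay_samples]
    return left, right
```

Delay-based pseudostereo is cheap but collapses poorly to mono, where the delayed copy causes comb filtering; checking the mono fold-down is advisable.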
The mixdown process converts a program with a multiple-channel configuration into a program with fewer channels. Common examples include downmixing from 5.1 surround sound to stereo,[a] and stereo to mono. Because these are common scenarios, it is common practice to verify the sound of such downmixes during the production process to ensure stereo and mono compatibility.
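A 5.1-to-stereo fold-down following the footnoted scheme can be written per sample as below. The -3 dB coefficient is a typical choice (as in ITU-style downmixes) rather than a universal rule, and discarding the LFE channel is likewise only one common option.

```python
import math

G = 1.0 / math.sqrt(2.0)  # -3 dB, a typical fold-down coefficient

def downmix_5_1_to_stereo(fl, fr, c, lfe, sl, sr):
    # Center and surround channels fold into left/right at -3 dB;
    # the LFE channel is discarded here, as it often is in practice.
    left = fl + G * c + G * sl
    right = fr + G * c + G * sr
    return left, right
```

Running the intended stereo material through such a matrix is exactly the compatibility check described above: content that cancels or piles up in the fold-down will be audible immediately.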
The alternative channel configuration can be explicitly authored during the production process with multiple channel configurations provided for distribution. For example, on DVD-Audio or Super Audio CD, a separate stereo mix can be included along with the surround mix. Alternatively, the program can be automatically downmixed by the end consumer's audio system. For example, a DVD player or sound card may downmix a surround sound program to stereo for playback through two speakers.
Mixing in surround sound
Any console with a sufficient number of mix busses can be used to create a 5.1 surround sound mix, but this may be frustrating if the console is not specifically designed to facilitate signal routing, panning and processing in a surround sound environment. Whether working in an analog hardware, digital hardware, or DAW mixing environment, the ability to pan mono or stereo sources, place effects in the 5.1 soundscape, and monitor multiple output formats without difficulty can make the difference between a successful and a compromised mix. Mixing in surround is very similar to mixing in stereo, except that there are more speakers, placed to surround the listener. In addition to the horizontal panoramic options available in stereo, mixing in surround lets the mix engineer pan sources within a much wider and more enveloping environment. In a surround mix, sounds can appear to originate from many more or almost any direction, depending on the number of speakers used, their placement, and how the audio is processed.
There are two common ways to approach mixing in surround:
Expanded Stereo - With this approach, the mix will still sound very much like an ordinary stereo mix. Most of the sources such as the instruments of a band, the vocals, and so on, will still be panned between the left and right speakers, but lower levels might also be sent to the rear speakers in order to create a wider stereo image, while lead sources such as the main vocal might be sent to the center speaker. Additionally, reverb and delay effects will often be sent to the rear speakers to create a more realistic sense of being in a real acoustic space. In the case of mixing a live recording that was performed in front of an audience, signals recorded by microphones aimed at, or placed among the audience will also often be sent to the rear speakers to make the listener feel as if he or she is actually a part of the audience.
Complete Surround/All speakers treated equally - Instead of following the traditional ways of mixing in stereo, this much more liberal approach lets the mix engineer do anything he or she wants. Instruments can appear to originate from anywhere, or even spin around the listener. When done appropriately and with taste, interesting sonic experiences can be achieved, as was the case with James Guthrie's 5.1 mix of Pink Floyd's The Dark Side of the Moon, albeit with input from the band. This is a much different mix from the 1970s quadraphonic mix.
Naturally, these two approaches can be combined in any way the mix engineer sees fit. Recently, a third approach to mixing in surround was developed by surround mix engineer Unne Liljeblad.
MSS - Multi Stereo Surround - This approach treats the speakers in a surround sound system as a multitude of stereo pairs. For example, a stereo recording of a piano, created using two microphones in an ORTF configuration, might have its left channel sent to the left-rear speaker and its right channel sent to the center speaker. The piano might also be sent to a reverb having its left and right outputs sent to the left-front speaker and right-rear speaker, respectively. Additional elements of the song, such as an acoustic guitar recorded in stereo, might have its left and right channels sent to a different stereo pair such as the left-front speaker and the right-rear speaker with its reverb returning to yet another stereo pair, the left-rear speaker and the center speaker. Thus, multiple clean stereo recordings surround the listener without the smearing comb-filtering effects that often occur when the same or similar sources are sent to multiple speakers.
[a] The left and right surround channels are blended with the left and right front channels. The center channel is blended equally with the left and right channels. The LFE channel is either mixed with the front signals or not used.