Although you may not be aware of it, the way we perceive sounds is strongly affected by a series of psychological effects known as psychoacoustics. Our hearing is not quite so clear-cut as you might imagine, and a number of psychoacoustic phenomena can dramatically affect the way we respond to the sounds we hear. Now we'll take a brief look at three very important characteristics of audio - frequency, volume and timing - in order to show some of the ways the human ear responds to sound and how our perception can be affected by subtle variables.
The way we perceive the loudness of sounds is critical to a number of processes in the studio, and all things are not equal. Our ear's frequency response is not linear, meaning that we perceive some sounds to be louder than others despite the fact that they may measure at the same sound pressure level. The audible frequency range is usually approximated to 20Hz - 20kHz (although this varies on an individual basis), but the ear is most sensitive to sounds around 2-4kHz.
Note also that 2-4kHz covers the frequency range most critical to the intelligibility of human speech, as well as the sound of babies crying - the ear is naturally biased towards the most important sounds that we hear. One important point to be made here is that your perception of the loudness of different frequencies changes significantly at different levels. At an overall lower listening volume, low and high frequencies appear to be much quieter relative to the midrange. Anyone with a slightly older hi-fi will be familiar with the now unfashionable 'loudness' button, which boosted the low and high frequencies at lower volumes in an attempt to compensate for this discrepancy.
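One way to put rough numbers on this non-linearity is the A-weighting curve, a standard approximation loosely derived from equal-loudness contours that describes how much quieter a tone sounds relative to 1kHz. A minimal sketch in Python (the formula is the standard IEC 61672 one; the function name is our own):

```python
import math

def a_weighting_db(f):
    """Approximate relative sensitivity of the ear at frequency f (Hz),
    using the standard A-weighting formula (defined to be 0 dB at 1 kHz)."""
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0

for freq in (50, 100, 1000, 3000, 10000):
    print(f"{freq:>5} Hz: {a_weighting_db(freq):+.1f} dB")
```

The curve reads roughly -19dB at 100Hz, 0dB at 1kHz and slightly positive around 3kHz - exactly the midrange bias described above, with the caveat that A-weighting is only a crude single-level approximation of the ear's behavior.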
This phenomenon helps to explain why your mixes can sound significantly different when you turn the volume up and down. One of the most common mistakes made when mixing is to monitor at a high level and push all the midrange elements up, which leaves them sounding unbalanced at lower levels. The classic solution is to mix at a low volume, especially when checking the relative levels of vocals and other crucial midrange elements.
However, there's a strong argument in favor of checking final mix EQ at a higher level if you're making music that will primarily be listened to at high volumes in clubs. Either way, picking an approach and sticking to it is probably the best bet for consistent mixes.
Out of Range
It's worth considering the effect of frequencies outside the audible frequency range here too. We may not be able to hear subsonic or hypersonic frequencies, but they can't be disregarded. In particular, the weight imparted by subsonic frequencies plays a major part in how we feel music.
To get an idea of how this works, try taking a heavy track with lots of sub-bass that you know well. Listen to it with the whole track running through a steep high-pass filter at around 80 Hz, then gradually reduce the filter frequency. You'll notice as you reduce the frequency that there comes a point where all the fundamental frequencies are present, but the track lacks weight and doesn't feel like it physically hits you as much as before.
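If you'd rather try the experiment above outside a DAW, it can be sketched in a few lines of Python. This is a simplified stand-in for a plug-in filter - two cascaded biquad high-pass stages using the widely known Audio EQ Cookbook coefficients - applied to a test signal whose 40Hz 'sub' and 200Hz fundamental are illustrative choices, not a recommendation:

```python
import math

def highpass_biquad(samples, cutoff, fs, q=0.7071):
    """One 12 dB/octave high-pass biquad (Audio EQ Cookbook coefficients)."""
    w0 = 2 * math.pi * cutoff / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    a0 = 1 + alpha
    b0, b1, b2 = (1 + cosw) / 2 / a0, -(1 + cosw) / a0, (1 + cosw) / 2 / a0
    a1, a2 = -2 * cosw / a0, (1 - alpha) / a0
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in samples:
        out = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        y.append(out)
        x1, x2, y1, y2 = x, x1, out, y1
    return y

def rms(sig):
    return math.sqrt(sum(v * v for v in sig) / len(sig))

fs = 44100
t = range(fs)  # one second of audio
sub = [math.sin(2 * math.pi * 40 * n / fs) for n in t]    # 40 Hz sub-bass
fund = [math.sin(2 * math.pi * 200 * n / fs) for n in t]  # 200 Hz fundamental

# Cascade two stages for a steeper (roughly 24 dB/octave) slope at 80 Hz.
cut = lambda sig: highpass_biquad(highpass_biquad(sig, 80, fs), 80, fs)

print(rms(cut(sub)) / rms(sub))    # sub-bass largely removed
print(rms(cut(fund)) / rms(fund))  # fundamental barely touched
```

The 40Hz component drops to a small fraction of its level while the 200Hz fundamental passes almost unchanged - the track's notes are all still there, but its weight is gone.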
Although most of us can't hear sound above 20kHz (and many of us struggle from as low as 14-16kHz), studies have suggested that humans perceive music differently when hypersonic frequencies are present. This has also been suggested as one of the weaknesses of the Red Book audio CD format (which can only reproduce frequencies up to the Nyquist frequency of 22.05kHz). Even though they're not obviously present, it's important not to forget these inaudible frequencies.
Louder is Better?
Scientific research has shown that listeners consistently perceive louder versions of a recording to sound subjectively better than quieter versions of exactly the same waveform. The implications of this research for musicians are significant.
Obviously, it suggests that if we listen to our mixes at a higher volume we'll perceive them to be better. In a music production and mixing context, the temptation to push the volume up in order to make the track sound better is powerful, but giving in to it is highly undesirable, as volume massively skews our subjective response to sounds. Not only do louder sounds seem subjectively better, but our sense of pitch is also seriously affected by volume. The effects are more extreme at the limits of our hearing range, making low notes sound lower when played louder and high notes sound higher. This can occasionally be problematic for instrument tuning and vocal pitching. As a result, making critical judgments at higher volumes is best avoided.
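A simple defense against this bias when comparing two versions of a sound is to match their levels before listening. A minimal sketch, assuming you can get both versions as lists of samples (the function names are our own):

```python
import math

def rms(sig):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(v * v for v in sig) / len(sig))

def match_level(candidate, reference):
    """Scale `candidate` so its RMS level matches `reference`,
    so an A/B comparison reflects tone rather than loudness."""
    gain = rms(reference) / rms(candidate)
    return [v * gain for v in candidate]

# Illustrative example: a 'processed' version that is simply 6 dB louder.
dry = [math.sin(0.01 * n) for n in range(10000)]
wet = [2.0 * v for v in dry]  # same waveform, +6 dB
matched = match_level(wet, dry)
print(20 * math.log10(rms(matched) / rms(dry)))  # → 0.0 dB difference
```

RMS matching is a rough stand-in for the loudness matching a mastering engineer would do by ear (or with a loudness meter), but even this crude version removes most of the 'louder is better' advantage from the comparison.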
However, the most significant consequence for most of us will come before the final mixing process, and relates more closely to production and sound design. When synthesizing sounds, choosing samples and applying effects processors to a track, we often make comparisons between two versions of a track at different levels, and this is where the subjective 'louder is better' effect can seriously warp our judgment. A number of effects - from compressors with incorrectly adjusted make-up gain to overdrive plug-ins - impart a volume change to your signal, and it's all too easy to trick yourself into believing an effect is improving the sound simply because it's boosting the volume. When adding effects, careful balancing of gain settings and use of the bypass button are highly recommended to avoid tricking your ears into thinking a louder version is better.
It's All About Timing
The timing of signals massively affects our perception of their position in 3D space. Since sound waves take time to travel from their source to our ears, pushing back the timing of one element (relative to others) gives the illusion that its source is moving further away from us. It's quite easy to experiment with timing delays in order to understand this phenomenon. Since sound travels quickly, it only takes very small timing differences to hear the effect.
The speed at which sound waves travel through air depends on the barometric pressure, temperature and humidity of the environment, but for our purposes here it's easily approximated to one foot per millisecond. Try adding a delay plug-in to one element of a mix, set to 100% wet signal, and gradually increase the delay time in the order of milliseconds. A 10ms delay should be enough to hear the sound get pushed back behind other elements of your mix. Of course, there's more to 3D space than timing, most notably left-right sound location. Furthermore, the frequencies of sounds are affected by their environment. The sound of a dog barking half a mile away is not just a delayed version of the same waveform we'd hear if we were standing next to the dog. For one, it is quieter, but it also has a different tonal characteristic as a result of the diffusion of the sound over space.
Disregarding echoes for a moment, the sound appears to lose some of its high frequencies as it travels through the environment to us. Since the high frequencies are diffused, gently rolling off the top end of the delayed track with a low-pass filter can also help to recreate the effect of a sound coming from further away. Echoes are also a crucial factor in the way our ears perceive the sounds we hear. Most natural environments will impart echoes and reverberation on a sound, and our ears are surprisingly adept at decoding the acoustic environment from the sounds we hear. On an obvious level, a sound recorded in a concert hall will have a significantly different sonic signature to one recorded in a small club. The decay time of the reverb and the time delay between the original sound and its early reflections are two of the most important characteristics of reverberation, and our hearing uses this information to calculate the size and shape of the space around the sounds we hear.
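The two ingredients above - a short extra delay and a gentle top-end roll-off - can be sketched together in Python. This is a rough illustration rather than a substitute for a real delay plug-in and filter; the function names, the 2kHz cutoff and the one-foot-per-millisecond figure are assumptions taken from the text:

```python
import math

def distance_to_delay_samples(distance_feet, fs=44100):
    """Extra delay (in samples) for a source `distance_feet` further away,
    using the rough one-foot-per-millisecond approximation."""
    return round(fs * distance_feet / 1000.0)  # 1 ft of distance ~ 1 ms

def one_pole_lowpass(samples, cutoff, fs=44100):
    """Gentle 6 dB/octave low-pass - a crude stand-in for the way
    distance rolls off a sound's top end."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / fs)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def push_back(samples, distance_feet, cutoff=2000, fs=44100):
    """Simulate a source moved further away: delay it and soften its highs.
    The 2 kHz cutoff is an arbitrary starting point, not a rule."""
    pad = [0.0] * distance_to_delay_samples(distance_feet, fs)
    return pad + one_pole_lowpass(samples, cutoff, fs)

def rms(s):
    return math.sqrt(sum(v * v for v in s) / len(s))

fs = 44100
sig = [math.sin(2 * math.pi * 8000 * n / fs) for n in range(fs)]  # bright tone
far = push_back(sig, 10)  # roughly 10 ms later and noticeably duller

print(len(far) - len(sig))        # samples of added delay
print(rms(far) / rms(sig) < 0.5)  # top end substantially reduced
```

For a fuller illusion of distance you'd also drop the overall level and add appropriate reverb, but even this two-step version pushes a sound convincingly behind dry elements of a mix.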
So how do I do it?
Even though we've only covered a fraction of the many psychoacoustic phenomena here, you may well be wondering how on earth you can account for all of them as you work. With so many interacting variables, surely it's impossible? The good news is that you're probably already compensating.
For example, if you have ears (and you probably do!) you're already 'using' equal loudness contours as you make music. Likewise, most producers will almost instinctively compensate for timing discrepancies, subtle shifts in frequency balance and 3D imaging as they work. The biggest practical lesson we can take from all this is that consistency is essential. If you've ever wondered why your mix sounded great at full volume but the vocals popped out when you turned it down, or why your track lacked 3D depth, psychoacoustics can help explain it.