Dispelling Popular Audio Myths
by Ethan Winer
This article first appeared in the September 1998 issue of Audio Media magazine (UK edition).
Most engineering fields require a college degree or at least state certification, and for good reason: If you design a drawbridge or high-rise office building, you'd better be able to back up your proposal with irrefutable science proving the design really works and people won't die. But the audio recording field has no such formal requirements. Anyone can claim to be an audio "engineer" and go about his or her business. Indeed, if you can produce recordings that sound good, nobody will argue about math or electronics theory - a great sound is all the credentials you need.
Every successful audio engineer knows how to get a good sound, but the lack of a solid technical foundation prevents many from fully understanding why what they do works. The result is endless arguments over the value of gold-plated connectors and audiophile speaker cable, whether a frequency response beyond 20 KHz really makes a difference, why tube-based amplifiers sound "better" than solid state designs, and so forth. I call this the Astrology Factor because so many opinions are stated as fact but with little or no supporting scientific evidence. The consumer audiophile world is full of such claims that cannot be substantiated, and many of these have spilled over into professional audio circles. As Carl Sagan used to say, we all need a well-equipped Baloney Detection Kit.
Most studio owners don't have an unlimited budget and must spend what funds they have as wisely as possible. The purpose of this article is to help you distinguish truth from fiction, so you can determine what is and is not worthwhile. A smart consumer spends no more than is necessary. If you have a modest home studio and an extra grand for some new gear, what is likely to make a bigger improvement to your mixes: that cool new digital reverb unit, or paying someone to replace all the capacitors in your mixer?
Experience has shown that it is futile to claim I know what someone else can and cannot hear. Therefore, I will relate only those things that make a difference to my experienced ears, and also explain what makes sense from the perspective of science and logic. Most of what follows is fact, but anything you construe as opinion is mine alone and does not necessarily reflect the views of my sponsors.
Myth: Even though people cannot hear frequencies above 20 KHz, it is important that audio equipment be able to reproduce higher frequencies to maintain clarity.
Fact: There is no evidence that a frequency response beyond what humans can hear is audible or useful. It is true that good amplifier designs generally have a frequency response well beyond the limits of hearing, and the lack of an extended response can be a give-away that the amplifier is deficient in some other area. For that reason alone (though there are certainly others), an amplifier's effective cut-off frequency - the point at which its output has dropped by 3 dB - must be high enough that the response loss at 20 KHz is still well under 1 dB.
With audio transducers - microphones and speakers - the frequency beyond which they do not respond (the cut-off frequency) is often accompanied by a resonant peak, which can add ringing and a boost in level at that frequency. Therefore, designing a transducer to respond beyond 20 KHz is useful because it pushes any inherent resonance past audibility. This is one important advantage of those expensive condenser microphones that use a tiny (less than 1/2-inch) diaphragm and are designed for critical audio testing.
It is very easy to determine, once and for all, if a response beyond 20 KHz makes a difference. All you need is a sweepable audio low-pass filter. You start with the filter set to well beyond 20 KHz, play the audio source material of your choice - I've used a set of keys jingling in front of a high-quality, small-diaphragm condenser mike - and sweep the filter downward until you can hear a difference. Then read the frequency noted on the dial.
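If you prefer to run the test offline, the same idea can be sketched in a few lines of Python with SciPy (my sketch, not a specific product; the file name and bit depth are assumptions): render the material through progressively lower cut-off frequencies, then audition the results and note where you first hear a change.

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

# Assumes a mono 16-bit source file named "keys.wav" (hypothetical).
rate, audio = wavfile.read("keys.wav")
audio = audio.astype(np.float64)

for cutoff in (21000, 19000, 17000, 15000, 13000, 11000):
    if cutoff >= rate / 2:                      # skip cut-offs at or above Nyquist
        continue
    sos = butter(8, cutoff, btype="low", fs=rate, output="sos")
    filtered = sosfiltfilt(sos, audio)          # steep low-pass at this cut-off
    wavfile.write(f"keys_lowpass_{cutoff}.wav", rate, filtered.astype(np.int16))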
Myth: Digital audio sounds worse than analog, and digital's lack of fidelity reveals itself as a sterile and harsh sound that lacks warmth, depth, imaging, clarity, and any number of other vague and elusive qualities.
Fact: Analog tape compresses dynamics and adds distortion, which can be a pleasing effect for many people (including me). But for pure faithfulness to the original signal, modern pro-quality digital wins hands down every time. It is true that when digital audio is recorded at too low a level, the result can sound grainy. This distortion is in addition to the hiss that an analog recording also has, and it is caused by using an insufficient number of bits. That is, recording at too low a level on a 16-bit system is similar to recording at a normal level on an 8-bit system.
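The arithmetic is easy to demonstrate. Here is a small sketch (illustrative values only) that quantizes a full-scale sine wave to 16 bits, then quantizes the same sine recorded 48 dB too low, which leaves only about 8 bits to describe the waveform:

import numpy as np

fs = 44100
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 440 * t)                   # full-scale source signal

def quantize_16bit(x):
    return np.round(x * 32767) / 32767               # 16-bit quantization

def snr_db(clean, quantized):
    noise = quantized - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

hot = quantize_16bit(sine)                           # recorded near full scale
low = quantize_16bit(sine * 10 ** (-48 / 20))        # recorded 48 dB too low

print(snr_db(sine, hot))                             # roughly 98 dB
print(snr_db(sine * 10 ** (-48 / 20), low))          # roughly 50 dB - 8-bit territory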
Vintage analog synthesizers may sound "warmer" than current digital models, but only because of the distortion inherent in their design. That wonderful fatness is the result of pushing the analog VCF and VCA circuits to their limits, in an effort to obtain a usable signal to noise ratio. But there is no reason a modern sampling synth cannot reproduce, if not generate, those same sounds exactly if given a proper source signal.
Myth: Gold-plated connectors sound better than connectors made with tin or nickel.
Fact: Gold does not tarnish, and tarnished connectors can cause problems, but there is nothing inherent in gold that makes it sound better than a clean connection using standard materials. Further, it is possible for connections using dissimilar metals to oxidize and deteriorate more quickly than if the same metal were used. So, mating a gold plug with a non-gold jack could theoretically make things even worse! Moreover, connectors plated with gold often use a very thin coating because of gold's high cost, and that plating can wear off with repeated plugging and unplugging. Therefore, while it would be unfair and untrue to say that gold connectors are a bad thing, unless both connectors are gold they are at best a waste of money and at worst a potential source of eventual trouble.
Myth: Using audiophile speaker cables improves the sound, compared to an equally heavy gauge of normal electrical wire.
Fact: The most important feature speaker wire can possess is low resistance at audio frequencies. The makers of expensive audiophile speaker cable claim their products are better because they have a frequency capability that extends into the MHz range. But there is no evidence that wire capable of carrying frequencies many times higher than what it will actually carry is useful or worth the extra money. Some of these cables are made up of many separate strands that are individually insulated - this arrangement is called Litz wire - to combat "skin effect." Skin effect is the propensity of current to flow on only the outer surface of a wire at high frequencies. Since the inner portion of the wire carries less current, the wire's overall effective resistance is greater at the very high frequencies. But skin effect occurs in substantial amounts only at frequencies many times higher than what humans can hear.
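The standard skin-depth formula shows just how far above the audio band this effect lives. A quick calculation with ordinary copper constants:

import math

RHO_COPPER = 1.68e-8       # resistivity of copper, ohm-meters
MU_0 = 4 * math.pi * 1e-7  # permeability of free space

def skin_depth_mm(freq_hz):
    return 1000 * math.sqrt(RHO_COPPER / (math.pi * freq_hz * MU_0))

for freq in (1_000, 20_000, 1_000_000):
    print(f"{freq:>9} Hz: skin depth = {skin_depth_mm(freq):.3f} mm")

At 20 KHz the current still penetrates about half a millimeter into the copper - comparable to the radius of ordinary #14 wire - so the resistance increase within the audio band is minor; at radio frequencies the depth shrinks to a few hundredths of a millimeter, which is where Litz construction starts to matter.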
The only truly negative effects you could attribute to speaker cable are too high a resistance (which affects an amplifier's damping factor), and high frequency losses due to cable inductance and capacitance. But you would need a long cable length before the reactive components (inductance and capacitance) affect anything within the audio range. Damping factor is the ability of an amplifier to absorb voltage fed back to it from the speaker. When you send a tone to a standard magnetic loudspeaker and then stop that tone, inertia causes the cone to continue vibrating. And as it vibrates a voltage is generated. The amplifier's output circuit attempts to halt that vibration by presenting a low impedance load - ideally, zero Ohms (a short circuit). So, while low-resistance wiring is clearly important, nearly any sufficiently heavy wire will suffice for a speaker cable in the lengths used by most recording studios. Heavy gauge zip cord is ideal for runs of twenty feet or less, and it's readily available in #14 and even thicker gauges.
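As a rough worked example (typical, assumed values), here is what a 20-foot run of #14 zip cord actually does to level and damping:

import math

R_PER_FOOT = 0.0025        # ohms per foot of #14 AWG copper (approximate)
run_feet = 20
cable_r = 2 * run_feet * R_PER_FOOT       # out and back: about 0.1 ohm total

speaker_z = 8.0                           # nominal speaker impedance, ohms
amp_out_z = 0.04                          # assumed amplifier output impedance

level_loss_db = 20 * math.log10(speaker_z / (speaker_z + cable_r))
effective_df = speaker_z / (amp_out_z + cable_r)

print(f"cable resistance: {cable_r:.2f} ohm")
print(f"level lost in the cable: {abs(level_loss_db):.2f} dB")   # about 0.1 dB
print(f"effective damping factor: {effective_df:.0f}")           # still well over 50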
Myth: Amplifiers based on vacuum tubes sound better than solid state designs, and a good tube preamp can even restore clarity and warmth that has been lost in the digital recording process.
Fact: Both types of amplifiers can have a frequency response flat enough for audio reproduction. But modern solid state amplifiers have measurably lower distortion than any tube-based design. Most tube-based power amplifiers also require an output transformer, which increases distortion - especially at the frequency extremes. Further, solid state power amps always have a better damping factor (see Audiophile Speaker Cables above).
Many people - including me - like the sound of tubes, especially in a good guitar amp. When driven to a point approaching distortion, tube circuits react more smoothly and with less harshness than solid state circuits. More precisely, tube distortion has a gradual onset that yields less "buzz," when compared to solid state devices that have a more clearly defined overload threshold and thus generate more high frequencies when driven to the point of distortion.
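That difference is easy to see in a small illustration (not a model of any particular amplifier): drive a sine wave into a gradual limiter and into an abrupt clipper, and compare the strength of the upper harmonics each one generates.

import numpy as np

fs = 48000
t = np.arange(fs) / fs
tone = 2.0 * np.sin(2 * np.pi * 1000 * t)     # a 1 KHz tone driven well into overload

soft = np.tanh(tone)                          # gradual onset, tube-like
hard = np.clip(tone, -1.0, 1.0)               # abrupt threshold, solid-state-like

def harmonic_levels_db(signal, fundamental=1000, harmonics=(3, 5, 7, 9)):
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    ref = spectrum[fundamental]               # with fs samples, bin number = frequency in Hz
    return {n: round(20 * np.log10(spectrum[fundamental * n] / ref), 1) for n in harmonics}

print("soft clipping:", harmonic_levels_db(soft))
print("hard clipping:", harmonic_levels_db(hard))

Both curves are symmetrical, so both produce only odd harmonics; the point is that the hard clipper's upper harmonics come out much stronger, and that is the "buzz."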
Even if you prefer the sound of tubes, please understand they simply cannot restore any quality that was lost earlier in the recording process. All a tube preamp can do is add an effect that you may find pleasing. Studio monitor amplifiers should never have a "sound"; if they do, they are in error. Tube circuits can affect the sound in a way that is similar to analog tape recorders, and you may in fact find that pleasing. I won't dispute that even-order distortion can sound good, because it adds overtones that are musically richer than those of odd-order distortion, which are more dissonant. However, all distortion adds intermodulation (IM) products that are not harmonically related to the source material, and are thus decidedly non-musical.
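The intermodulation point can be sketched the same way: pass two tones through a generic nonlinearity (assumed here, with both even- and odd-order terms) and new components appear at sums and differences of the input frequencies - products that are not harmonics of either tone and have no musical relationship to the material.

import numpy as np

fs = 48000
t = np.arange(fs) / fs
f1, f2 = 1000, 1300
two_tones = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
distorted = two_tones - 0.2 * two_tones ** 2 - 0.1 * two_tones ** 3   # generic nonlinearity

spectrum = np.abs(np.fft.rfft(distorted * np.hanning(len(distorted))))
ref = spectrum[f1]
for im in (f2 - f1, f2 + f1, 2 * f1 - f2, 2 * f2 - f1):   # 300, 2300, 700, 1600 Hz
    print(f"{im:>5} Hz: {20 * np.log10(spectrum[im] / ref):.0f} dB relative to the tones")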
Myth: Psychoacoustic processors work their magic by correcting phase errors introduced elsewhere in the audio chain.
Fact: Some of these units use frequency-dependent compression to make a track sound "clearer," and others do so by mixing in high-frequency distortion components. Phase shift within a single audio channel makes no difference per se unless the amount of shift is actively changing; however, phase shift in only one channel of a two-channel signal can affect the stereo localization.
Phase is one of those elusive terms that gets tossed around too often by people who don't really understand it. First, the term phase is often used incorrectly when what is really meant is polarity. If you exchange the wires in an audio signal path such that a positive voltage makes the loudspeaker draw into the cabinet rather than push outward, you are inverting the polarity, not shifting the phase. Phase shift is simply a small amount of time delay, where the amount of delay varies with frequency. More to the point, phase shift occurs naturally and unavoidably in all speakers and crossovers, and in every EQ circuit, with no obvious detriment as far as I can discern. Phase shift can be an important factor in speaker and crossover designs, but only because a tone whose frequency is near the crossover point is radiated by two speakers at once. In that case phase shift could cause the two acoustic outputs to partially cancel each other as they combine in the air in front of the speakers. But any time you stand in front of a loudspeaker and then simply take one step backward, you are inducing a large amount of phase shift at the higher frequencies.
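The arithmetic behind that last point is simple: a step backward adds the same time delay at every frequency, but expressed as phase shift that delay is enormous at the top of the audio band and modest at the bottom.

SPEED_OF_SOUND = 343.0            # meters per second
step = 0.75                       # meters - an assumed comfortable step
delay = step / SPEED_OF_SOUND     # about 2.2 milliseconds

for freq in (100, 1_000, 10_000):
    cycles = delay * freq
    print(f"{freq:>6} Hz: {cycles:.2f} cycles = {cycles * 360:.0f} degrees of phase shift")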
I became convinced that phase shift by itself is relatively benign when I built a phase shifter outboard effect unit. These devices work by shifting the phase of a signal and then combining the original source with the shifted version, thus yielding a series of peaks and valleys in the frequency response. During the course of testing this unit, I listened to the phase-shifted output only. While the Shift knob was being turned, it was easy to hear a change in the apparent "depth" of the track being processed. But as soon as I stopped turning the knob, the sound settled in and the static phase shift became inaudible.
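Anyone who wants to repeat the experiment in software can do so with ordinary first-order all-pass stages (my sketch below, not the circuit of any particular unit): the shifted signal heard by itself is just the signal, while mixing it with the original produces the familiar comb-filter notches.

import numpy as np
from scipy.signal import lfilter

def allpass_chain(x, coeff, stages=4):
    # First-order all-pass: H(z) = (coeff + z^-1) / (1 + coeff * z^-1).
    # Unity gain at every frequency; only the phase is altered.
    for _ in range(stages):
        x = lfilter([coeff, 1.0], [1.0, coeff], x)
    return x

rng = np.random.default_rng(0)
dry = 0.1 * rng.standard_normal(48000)    # stand-in source: one second of noise

shifted = allpass_chain(dry, coeff=0.6)   # the phase-shifted output, heard alone
phased = 0.5 * (dry + shifted)            # mixed with the original: notches appear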
[Added April 11, 2004]: For even more compelling proof that phase shift alone is inaudible, see this gem I recently discovered: Some Experiments With Time
Myth: Replacing the resistors and capacitors in preamps and power amps with higher quality units can improve the sound of a system.
Fact: Unless your capacitors are defective (for example, they allow DC current to leak through), or have changed their value over time due to heat and other environmental factors, you are not likely to improve anything by replacing them. The same goes for replacement metal film resistors. It's true that metal film resistors have lower noise than other types, but that makes a difference only in certain critical circuits, such as the input stage of a high-gain mike preamp. It's also true that different types of capacitors are more or less suitable for different types of circuits. But if you think the designers of your amplifier or mixer were too stupid to have used appropriate components in the first place, why would the rest of the design be good enough to warrant the cost of improved parts? In fairness, extremely old gear often employs carbon composition resistors, and replacing them can make a difference in many audio circuits. But anything manufactured in the past 20 years or so will use carbon film resistors and decent capacitors.
If a mixer or mike preamp is already audibly "transparent" and its specs show nearly unmeasurable distortion with a frequency response flat from DC to light, how can it possibly be made better? Bear in mind that a distortion figure of 0.01 percent means that all of the distortion components, added together, are 80 dB below the level of the original signal! Indeed, the single best way to maintain transparency is to minimize the number of devices in the audio path.
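That figure is easy to verify:

import math

print(20 * math.log10(0.01 / 100))   # 0.01 percent of the signal = -80 dB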
Myth: British-designed equalizers sound better than equalizers designed by persons of other nationalities.
Fact: In a nutshell, this is racist thinking! All that should vary in a competent EQ circuit are its center frequencies, boost/cut range, and Q (bandwidth). Years ago, before parametric equalizers were commonplace in even entry-level audio gear, mixing console equalizers generally offered a limited number of fixed frequencies that were selected via switches. The Q was also fixed at whatever the designer felt was appropriate and "musical" sounding. Therefore, in those days there were audible differences in the sound of equalizer brands; one designer may have opted for a certain choice of fixed frequencies at a given Q, and another designer picked different frequencies and Q. However, there should never be an inherent "sound" to an equalizer beyond what you ask it to do to the signal passing through it. Perhaps some EQ circuits do vary in tone as they approach clipping, but sensible engineers don't normally operate at those levels. The only other possible explanation is that very old equalizers used inductor coils, and inductors can ring and add distortion. However, modern designs use op-amps and capacitors, because of the problems with real inductors (not to mention their high cost).
With midrange EQ, a low Q lets you make a big change in the sound quality with little boost or cut and without making the track sound too "affected." A high Q imparts a resonant effect to a sound. This might be useful, for example, to bring out the low tone of a snare drum by zeroing in on that one frequency and boosting the bejeezus out of it. But any decent parametric EQ can be set to a low or high Q to get those sounds. So, whatever one might describe as the sound of "British EQ" can be duplicated by any fully parametric equalizer. Indeed, I've heard it said that all the British consoles have different-sounding EQs, and some people love the sound of an SSL equalizer but hate the Trident, and vice versa. To my mind this confirms the notion that there is no such thing as British EQ: If the British console equalizers are so different, then what aspect or sound binds them together to produce a commonality called "British?"
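To make the point concrete, here is a sketch of a generic peaking band using the widely published "Audio EQ Cookbook" biquad formulas (not any particular console's circuit); frequency, gain, and Q really are the only knobs there are.

import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, freq, gain_db, q):
    # Standard peaking-EQ biquad: boost or cut gain_db at freq, with bandwidth set by q.
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * freq / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

# Hypothetical settings, applied to some mono array "track" at sample rate fs:
# gentle = peaking_eq(track, fs, freq=400, gain_db=-3, q=0.7)   # broad, subtle cut
# ringy  = peaking_eq(track, fs, freq=180, gain_db=9, q=8.0)    # narrow, resonant boost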
Myth: Absolute microphone or speaker polarity makes an audible difference.
Fact: While nobody would seriously argue that it is okay to reverse the polarity of just one channel in a stereo pair, I've never been able to determine that reversing the polarity of an entire signal - both channels, if stereo - ever makes an audible difference. Admittedly, it would seem that absolute polarity might make a difference in some cases, for example, when listening to a bass drum. But in practice, changing the absolute polarity has never been audible to me.
You can test this for yourself easily enough: If your console offers a polarity-reverse switch, listen to a steadily repeating bass drum hit and then flip the switch. It is not sufficient to have a drummer go into the studio and hit the drum while you listen in the control room, because every drum hit is slightly different. The only truly scientific way to compare absolute polarity is to audition a looped recording or drum sample, to guarantee that every hit is identical.
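Building such a test file takes only a few lines. This sketch assumes a single 16-bit kick drum sample saved as "kick.wav" (hypothetical), and writes out the identical hit twice, the second copy with its polarity inverted, ready to loop in any audio editor.

import numpy as np
from scipy.io import wavfile

rate, hit = wavfile.read("kick.wav")       # the original sample
hit = hit.astype(np.float32)
test = np.concatenate([hit, -hit])         # same hit, then polarity reversed
wavfile.write("kick_polarity_test.wav", rate, test.astype(np.int16))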
Important Update: Mike Rivers from Recording magazine sent me a test Wave file that shows absolute polarity can be audible in some circumstances. The polarity.wav file (87k) is a 20 Hz sawtooth waveform that reverses polarity in the middle. Although you can indeed hear a slight increase in the low end fullness after the transition point, I'm still not 100 percent certain what this proves. I suspect what's really being shown is a nonlinearity in the playback speaker, because with a 50 Hz sawtooth waveform there is no change in timbre. However, as Mike explained to me, it really doesn't matter why the tone changes, just that it does. And I cannot disagree with that.
More Update Info: After discussing this further with Mike in the rec.audio.pro newsgroup I created two test files you can download and audition yourself. The Kick Drum Wave file (324 KB) contains a kick drum pattern twice, with the second reversed. Play it in SoundForge or any audio editor that has a Loop mode, so you can play it continually to see if you hear a difference. The Voice Wave file (301 KB) is the same but with me speaking, because Mike says reversing polarity on a voice is surely audible. I don't hear any difference at all. However, I have very good loudspeakers in a room with proper acoustic treatment. As explained above, if your loudspeakers can't handle low frequencies properly that could account for any difference you might hear.
Yet Another Update: Loudspeaker nonlinearity can definitely change the sound when the driver cone pushes out versus pulls in. But the same thing happens in our ears as explained in this more recent article: Absolute Polarity & IM Distortion in the Ears
Myth: Replacement A/D and D/A converters can improve the sound of a professional-quality digital recorder.
Fact: I'm not going to tell you that all 16-bit pro-quality analog-to-digital converters sound identical, or that it is impossible for some - especially older ones - to have an affected sound. Perhaps there really are slight differences that some people can detect, and you may well be one of those people. However, I am not embarrassed to admit that I can't hear any change between the source and playback on my Alesis ADAT, or on my Sony PCM-2300 DAT recorder. Therefore, it makes no sense to replace what I already have, since by definition there's no way to improve upon "I can't hear any change." It comes down to whether the difference - if there really is a meaningful difference - is worth the additional expense to you.
CONCLUSION
As with the Emperor's New Clothes, many people let themselves be conned into believing that a higher truth exists, even if they cannot hear it. There is no disputing that hearing can be improved with practice and that you can learn to recognize detail, but that is certainly not the same as imagining something that doesn't exist to begin with. And, logically speaking, just because a large number of people believe something does not by itself make it true. Even more important, all the audiophile tweaks in the world are meaningless compared to such basics as installing proper acoustic treatment in the control room and using solid engineering techniques.
It is difficult to prove or disprove issues like those I have presented here because human auditory perception is so fragile and our memory is so short. With A/B testing - where you switch between one version of a signal and another to audition the difference - it is mandatory that the switch be performed very quickly. If it takes you fifteen minutes to hook up a replacement amplifier, it will be very hard to tell if there truly was a difference, compared to being able to switch between the two amps in less than a second. Even when switching quickly, it is important that both amplifiers be set to exactly the same volume level.
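The same rule applies when the comparison is between two recorded files rather than two amplifiers; a small sketch of matching levels before listening, assuming two mono arrays at the same sample rate:

import numpy as np

def match_rms(reference, other):
    # Scale "other" so its RMS level equals that of "reference" before any A/B listening.
    gain = np.sqrt(np.mean(reference ** 2) / np.mean(other ** 2))
    return other * gain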
When all else is equal, people will generally pick the brighter (or just louder) version as sounding better, unless of course the sound already was too loud or bright. People will sometimes report a difference even in an "A/A" test, where nothing at all has changed! And just because something sounds "better," it is not necessarily higher fidelity. Goosing the treble and bass or adding a little compression often makes a track sound better, but that doesn't mean the result is more faithful to the original source material.
Psychological factors like expectation and fatigue also play an important part in one's assessment of sound, even when nothing physical has changed. If I brag to someone about how great my studio's playback system sounds and then that person comes over to hear it, my system always sounds worse to me while we're both listening. Finally, it's important to consider the source of any claim, though a financial interest in a product doesn't necessarily mean the claims are exaggerated or untrue. But there's probably more than a little truth to the popular sentiment, "The most important person in a company that makes audiophile speaker wire is the head of marketing."
With special thanks to Bill Eppler.
Ethan Winer has been a professional musician, composer, audio engineer, recording instructor, computer programmer, and consultant since the 1960s. He was a writer and contributing editor for PC Magazine for many years, and has written feature articles for Electronic Musician, Recording, and R-e/p magazine. Ethan now writes mostly classical music, and plays the cello in the Danbury (Connecticut) Symphony Orchestra.
Entire contents of this web site Copyright © 1997- by Ethan Winer. All rights reserved.