I rediscovered a great article that talked about the “mosquito” (near-ultrasonic) range of frequencies young people use for their ringtones, so as not to alert “adults” should their phones “ring” in class.
A question I’ve always had is *when* to agree that a certain frequency range *can* or *cannot* be heard – mainly because these articles never mention at what volume level, or through what type of output medium, the tones were played.
Granted, the article mentioned that even cellphones should be capable of playing up to the standard *20kHz* audio range, but we have to take into account that they haven’t quantified how much attenuation, in `dB`, would be applied compared to, say, a high-fidelity, properly amplified speaker. If cellphones can play near-ultrasonic frequencies, naturally laptops can too. But it doesn’t take a rocket scientist to know that even at *regular* musical frequencies, laptops lack volume compared to real speakers – no matter how “fresh” a person’s ears are.
I mention all of this because I can hear up to 17kHz quite fine with the laptop. The *19kHz* sample, though, is almost inaudible on the laptop – so much so that I can only hear it properly when putting my ear *really close* to the speakers (like inches away). It’s easier to hear using regular speakers, though that still requires my head to be somewhat close to them. This leads me to the question: given that my ears aren’t that fresh anymore, does the fact that I can still hear these tones on a special setup count? Or should they sound loud and clear on *any* output medium? If a person indeed suffers from *presbycusis,* and everything above, say, *16kHz* is already “erased” for them… do they hear it *at all* through *any* type of output medium (amplified or otherwise)?
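If you want to control for the volume variable yourself, here’s a minimal sketch in plain Python (standard library only; the file names and the −6 dBFS level are just my choices for the example) that writes a set of test tones at the *same* digital level – so whatever differences you hear are down to your ears and the playback hardware, not the files:

```python
import math
import struct
import wave

def write_tone(path, freq_hz, amplitude=0.5, duration_s=3.0, rate=44100):
    """Write a mono 16-bit WAV containing a single sine tone.

    amplitude is linear (1.0 = digital full scale), so 0.5 is about -6 dBFS.
    """
    frames = b"".join(
        struct.pack(
            "<h",
            int(amplitude * 32767 * math.sin(2 * math.pi * freq_hz * i / rate)),
        )
        for i in range(int(duration_s * rate))
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(rate)   # 44.1 kHz, the CD rate
        w.writeframes(frames)

# Every file gets exactly the same digital level.
for f in (14_000, 17_000, 19_000):
    write_tone(f"tone_{f}.wav", f)
```

A proper test would sweep the *amplitude* down at a fixed frequency to find your threshold – which is precisely the number these articles never quote.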
—
The article also mentioned this in passing:
> The practical limits for a suitable audio bandwidth for music were established long before the Compact Disc system came out, and the conventional 20-kHz upper limit (easily reached by the CD format) is quite wide enough, even for teenagers. In fact, the actual highest possible frequency on a CD is 22.05 kHz, which is almost a half-octave higher than 17 kHz. Extending it further as done by DVD-Audio, SACD, and other so-called high-resolution audio technologies, has never made any engineering or musical sense.
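For context, the 22.05 kHz figure isn’t arbitrary: CD audio is sampled at 44.1 kHz, and the Nyquist theorem says a sampled signal can only represent frequencies up to half the sampling rate. Anything above that limit doesn’t just disappear – it folds (*aliases*) back down below the ceiling. A quick illustration in plain Python:

```python
import math

RATE = 44_100        # Hz: the CD sampling rate
NYQUIST = RATE / 2   # 22_050 Hz: the highest representable frequency

def samples(freq_hz, n=6):
    """First n samples of a unit sine at freq_hz, sampled at the CD rate."""
    return [round(math.sin(2 * math.pi * freq_hz * i / RATE), 4) for i in range(n)]

# A 25 kHz tone is above Nyquist; it folds back to 44_100 - 25_000 = 19_100 Hz.
# Sample for sample, it is indistinguishable from a 19.1 kHz tone
# (here with inverted sign, i.e. opposite phase).
print(samples(25_000))
print([-s for s in samples(19_100)])  # the same numbers
```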
I couldn’t help but be reminded of the Compact Disc vs. Vinyl debate from *waaaaay* back. Basically, there were purists who said that CD could never match the quality of Vinyl… or that, essentially, Vinyl was *better.* Mind you, “better” should be qualified as a subjective opinion (as in *sounds* better), but we all know now that in terms of fidelity, this is a ridiculous claim.
For those who like the sound of vinyl: records don’t sound better because the compact disc system is inferior – quite the opposite, in fact. We as humans tend to like warmer, *dirtier* sounds… “colored” sounds, if you will. The colorization of audio adds to the music’s “feel.” I guess a somewhat reasonable analogy is the fact that we use effects in recording in general – the use of reverb, for example, or the whole fact that compression and equalization take place. *All* instruments (vocal or otherwise) have a fundamental set of frequencies that a CD can accurately represent, but what the human ear has been accustomed to is those frequencies in relation to the environment they were heard in. If you take away all that “noise” we’ve always taken for granted, we’ll probably get a considerably different sound from what we have always known to be “real.”
The imperfections of older recording media (e.g. vinyl) introduce colorization of what is *supposed to be* the pure source – which, incidentally, sounds pleasing to the ear. It is subjectively different from what is perceived to be the “sterility” of the CD system, which is, in fact, the more faithful representation.
In short, there’s nothing wrong with the CD system, nor is it lacking in any way, as the quoted line above states. It just cannot distort/color audio on its own. A CD can sound like a record if you want it to – just run the audio through some effect that simulates that particular analog “colorization” – which, as of now, ironically, is a weakness of digital processing. There is a lot of software that does “analog” filtering, but of course it’s still not quite the real thing. After all, from a purely objective/technical perspective, you’re ultimately trying to **re-introduce** “noise” that was supposed to be *eliminated* in the first place. But, just like in digital photography, there will come a time when digital media will be able to copy their analog predecessors **to the dot**… and do *much more.*
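As a toy illustration of that “run it through an effect” idea (nothing like a serious vinyl emulation – the function name, the `tanh` soft-clipper, and all the parameter values here are assumptions for the example): take clean mono samples in [−1, 1] and deliberately re-introduce harmonic distortion, hiss, and the occasional tick:

```python
import math
import random

def vinylize(samples, drive=2.0, hiss=0.002, tick_rate=0.0005):
    """Toy 'analog colorization': soft saturation plus hiss and dust ticks.

    samples: mono float samples in [-1.0, 1.0].
    drive: how hard the soft-clipper is pushed (more = warmer/dirtier).
    hiss: amplitude of the constant background noise floor.
    tick_rate: probability per sample of a vinyl-style crackle.
    """
    out = []
    for s in samples:
        colored = math.tanh(drive * s) / math.tanh(drive)  # adds odd harmonics
        colored += random.uniform(-hiss, hiss)             # surface hiss
        if random.random() < tick_rate:
            colored += random.uniform(-0.3, 0.3)           # dust tick
        out.append(max(-1.0, min(1.0, colored)))
    return out

# A "sterile" 440 Hz sine picks up harmonics and a noise floor
# that the clean source never had.
clean = [0.8 * math.sin(2 * math.pi * 440 * i / 44_100) for i in range(44_100)]
dirty = vinylize(clean)
```

The `tanh` stage is where the “warmth” comes from: it generates odd harmonics the clean signal never contained, which is exactly the kind of colorization described above.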
I’m not saying that colorization is bad, though. It is true that those “inconsistencies” in the audio are the very things that make a sound, sound *real.* Those inconsistencies are what make it possible to tell a real string section from a *really good* software synthesizer’s string patch. And while ironic, it *is* a welcome thing that people are reintroducing analog aspects into the digital era.