The Problem With A-B’ing And Why Neil Young Is Right About Sound Quality


Great Tape Op post that’s thinking big about audio, music, and hearing.

The main crutch of the 'good enough' camp is what is called the double-blind listening test (shortened to ABX). When doing studies based on perception, it is the great measuring stick, and perhaps the only way to start squeezing some numbers out of human sensory perception.

It’s basic – here’s source A, here’s source B, maybe switch back and forth a couple of times, now make your decision. Which one was better? Can you hear a difference? Do you like one better than the other?
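The protocol itself is simple enough to sketch in a few lines. Here's a toy simulation (the two "listeners" below are hypothetical stand-ins, not anything from the article) showing why statisticians like ABX: someone who hears no difference scores around chance, while someone who reliably hears one scores near perfect.

```python
import random

def run_abx_trials(n_trials, guess):
    """Toy ABX protocol: each trial, X is secretly source A or source B;
    the listener's guess is scored against the hidden answer."""
    correct = 0
    for _ in range(n_trials):
        hidden = random.choice(["A", "B"])   # X is secretly one of the two
        if guess(hidden) == hidden:          # listener tries to identify X
            correct += 1
    return correct

random.seed(0)
# A listener who cannot hear any difference just flips a coin:
chance = run_abx_trials(100, lambda hidden: random.choice(["A", "B"]))
# A listener who always hears the difference identifies X every time:
perfect = run_abx_trials(100, lambda hidden: hidden)
```

Note that the protocol only measures *identification* over a short exposure, which is exactly the limitation the article is objecting to.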

But as the article states, every ABX test is flawed because of its short sample time, and building out theories on these quick 'taste-test' findings has led us to this mess of bad science and bad assumptions.

Since we live with and love music in intimate ways we cannot accurately write or describe, the author proposes that for any “double blind” tests to be valid the subjects should actually get to keep and live with their music collection for a month or two, then report their feelings towards it.

Much like how a sugary treat tastes better than anything next to it, but if you lived on sugary treats all month you would feel much worse than the person with the quality diet. Often the lesser files are close enough on initial inspection to fool enough people, and the ABX test stops right there. No one is doing long-term ABX tests; we are all doing taste tests, not nutrition tests.

Neil Young and the high-def audio movement are about getting the nutrition back into your music. There's industrial white bread, and then there are all the other breads. Both hold the sandwich together, but the nutrition inside leads us to different outcomes.


Resolution, Not Frequency Range

Anyone arguing about audio and getting stuck on the overall hearing range of humans is actually missing the point.

What digital audio has really been doing is giving us lower resolutions on the sounds we can hear.

Have you ever had a car radio with a dial that won’t go to the exact volume you want? The ‘chunks’ are too big to get it exactly where you want it? That’s a lack of resolution in that volume knob. Put that lack of resolution throughout every part of the audio program and the overall effect is perhaps not easily heard, but it seems to be easily felt. – Excerpt From Save The Audio

HD audio is really about the resolution, not the frequency range. The colors won't be brighter; there will just be more of them available. Having more available means the computer has less to guess about.
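The volume-knob analogy maps directly onto bit depth. A rough sketch (the wave and bit depths are illustrative, not from the article): quantize a smooth signal onto a coarse grid and onto a fine one, and measure how far each rounded sample lands from the true value.

```python
import math

def quantize(x, bits):
    """Round a sample in [-1.0, 1.0] to the nearest step of a
    'bits'-deep grid -- the audio equivalent of a chunky volume knob."""
    steps = 2 ** (bits - 1)        # levels available on each side of zero
    return round(x * steps) / steps

# A smooth sine wave, sampled 1000 times:
signal = [math.sin(2 * math.pi * t / 1000) for t in range(1000)]

# Worst-case rounding error at two bit depths:
err_8 = max(abs(s - quantize(s, 8)) for s in signal)
err_16 = max(abs(s - quantize(s, 16)) for s in signal)
# More bits means finer steps: err_16 is roughly 256x smaller than err_8
```

Every extra bit halves the step size, which is why "resolution" is the right word for what bit depth buys you.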

The whole "no one can hear above blah blah" is just a diversion from the fact that we can all hear, and do indeed miss, what the computers have been removing from our music.


Digital Audio Versus Timbre

If an electric guitar and a piano both play a C major chord at the same volume, can you tell the difference between them? Would you be able to discern which instrument played which version of the same chord? Digital audio programmers hope you can't.

If a violin and a plastic keyboard both hit a D and hold it out, can you hear a difference between them? What are the differences between the violin and the electronic keyboard when the result is the same note? Digital audio programmers hope you don’t know or care.

If you recorded the two tests above and played them back, would you still be able to hear a difference between them? Of course you would, but the more you degrade the digital audio by compressing in a 'lossy' format, the more the differences between the two diminish. Somewhere around 128k lossy you'd have trouble hearing any difference between the instruments, even if they are from different families altogether.

So how exactly do you tell the difference between the instruments, and how well they are played? We don't even have words to describe all of what is happening there. But you can hear the difference, even if the computer just sees the frequency and the volume. Most of this familiarity with "what is making that sound" falls under the term timbre, and then most of it is thrown out in the digital realm.
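One way to make the idea concrete: two tones can share the exact same fundamental pitch and differ only in their overtone recipe, and that recipe is a big part of timbre. Here's a toy additive-synthesis sketch (the amplitude lists are made up for illustration, not measured from real instruments):

```python
import math

def tone(harmonic_amps, freq, sample_rate=8000, n=800):
    """Additive synthesis: sum harmonics of 'freq' with the given amplitudes."""
    return [sum(a * math.sin(2 * math.pi * freq * (h + 1) * t / sample_rate)
                for h, a in enumerate(harmonic_amps))
            for t in range(n)]

# Same fundamental (440 Hz), two different overtone recipes:
bright = tone([1.0, 0.6, 0.4, 0.3], 440)   # strong upper harmonics (guitar-ish)
mellow = tone([1.0, 0.1, 0.02, 0.0], 440)  # mostly fundamental (flute-ish)

# Same note, yet the waveforms differ sample by sample:
difference = max(abs(b - m) for b, m in zip(bright, mellow))
```

A pitch detector would call both tones "A440," but the sample-level difference between them is large, and that difference is precisely the kind of detail a lossy codec is tempted to shave off.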

Timbre is where they go looking for things to LOSE when compressing digital audio. Why do you care if it’s a piano or strings, you hear the note, you get the point, right?  The timbre is what many like Neil Young talk about as being part of the ‘soul’ of music, unquantifiable and very emotional for each person.

Lossy compression formats were developed for dial-up modems (remember those?), and to shrink the file by 80% or more they actually threw out most of the timbre, most of the sub-lows, most of the highs, and most of the steps for panning and depth. Part of what you hear as mp3 artifacts is all those holes in the timbre being filled with wrong data.
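Some back-of-envelope arithmetic shows the scale of the discard. CD-quality PCM runs at 44,100 samples per second, 16 bits per sample, in stereo; squeezing that into a 128 kbps file means throwing away roughly nine out of every ten bits:

```python
# Bit budget of uncompressed CD audio vs a 128 kbps lossy file
cd_bps = 44_100 * 16 * 2          # sample rate x bit depth x stereo channels
mp3_bps = 128_000                 # a typical dial-up-era lossy bitrate
reduction = 1 - mp3_bps / cd_bps  # fraction of the data that must be discarded
# cd_bps == 1_411_200, so roughly 91% of the bits are gone at 128k
```

Even a "high quality" 256 kbps file still discards over 80% of the original bit budget, which puts the article's figure in the right ballpark.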

BTW — the cover image is a microscopic view of an actual groove in a record. Look at the amount of vibration data the stylus picks up as it drags through that groove. 16 bits is just not enough data space to recreate all of that.
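For a rough sense of what "16 bits" buys, the standard engineering rule of thumb is about 6 dB of theoretical dynamic range per bit of depth (this is the textbook formula for linear PCM, not a claim from the article):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM at a given bit depth:
    20 * log10 of the number of quantization levels (~6.02 dB per bit)."""
    return 20 * math.log10(2 ** bits)

dr_16 = dynamic_range_db(16)   # CD audio: ~96.3 dB
dr_24 = dynamic_range_db(24)   # 24-bit 'hi-res' PCM: ~144.5 dB
```

Whether ~96 dB is "enough" for a vinyl groove is exactly the argument this post is having; the formula just puts a number on what each format can theoretically capture.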


1 Trillion Odors, or a Lot of Funk


Ha, imagine that! I'm running all over the internet fighting bad science about hearing and music, and the journal Science publishes a study that says scientists have really underestimated the abilities of our nose and sense of smell.

Oh those crazy scientists, always learning more about our senses. Always so amazed at what the human body and brain can do. Sometime Simpleton.

This mirrors what is happening in the audio world. I really do think we will look back at the days (decades) of claiming “humans can’t actually perceive anything beyond 16/44 digital files” as the ignorant dark ages of hearing science. Producers and musicians have been ignored and derided in the name of digital convenience for many years now.

All it takes is one scientific paper to state something about how we can sense all kinds of other tones, timbres, and frequencies throughout our bodies, and how when receiving the full spectrum of audio, human bodies react positively. Familiarity is the first stage of listening, but we must go further than that for actual enjoyment.

But that’s not science, is it? It’s just a reality that is hard to quantize.



Bad Science + Business Interests = Trouble

Computer geeks know lots of things. The sheer breadth of stuff that geeks have crammed in their head is impressive.

But their major mistake is often not acknowledging their own ignorance. Many have come up in a world so digitally driven that they forget they are analog animals.

They forget sound, light, smell, touch are all analog. These are things computers don’t do natively.

In fact it has taken 40+ years of digital advancement to even start competing with original (analog) methods of creation.


Hi there I’m analog

Most computer nerds know nothing about professional media production. They might know the basics or have clicked around a bit with an app, but they know nothing of producing high quality media for a living.

On the other hand, most producers these days have to know their computers, especially the parts critical to creating professional media. I believe some nerds don’t like the competition so they declare themselves experts on everything digital.

Experts are the people that do it for a living, not people tasked with spreading false information on the internet.

A computer programmer/nerd believes there is a digital solution to everything.

On this bad foundation they build the fatal flaw of believing a digital copy of something analog will somehow be superior. Many sub-measurements of that digital file might beat the analog original, but remember to always step back and ask "what is this trying to solve?".

Music is created to get an emotional response from us and that requires as much audio data as possible.


All consumer digital music, from the CD in 1982 on, has been a compromise. When you hear analog playback you are hearing a reflection of the recording: an analog copy that is slightly degraded but overall intact and whole.

The original sounds hit the microphone in analog, and they hit your ears in analog. The signal has not been broken up and re-assembled, and no computer decided what to keep and what to throw out.

Nature does degrade the signal to a certain extent (magnetism in a tape or physical dragging movement on vinyl), but no programmer had to determine mathematically what parts of your music to throw out.

Computer nerds trust in the computer to decide what’s important in our audio signal, more than they trust their own intuition or senses.

Computers don’t have skin, hair, ears, or emotions, so what do they know about music? Nothing. Nada.

Programmers with agendas are behind much of this nonsense, and it is all based on a total misunderstanding of how we hear, and what we actually get from music.

Familiarity is just step 1. “I can recognize that song I like!” is not the same as hearing the whole thing the way it was intended.

Check out this cool article about a guy who helped design the Pono Player.