If an electric guitar and a piano both play a C major chord at the same volume, can you tell the difference between them? Would you be able to discern a difference between the guitar's and the piano's versions of the same chord?
Digital audio programmers hope you can’t.
If a violin and a plastic keyboard both hit a D and hold it out, can you hear a difference between them? What are the differences between the violin and the electronic keyboard when the result is the same note?
Digital audio programmers hope you don’t know or care.
If you recorded the two tests above and played them back, would you still be able to hear a difference between them?
Of course you would, but the more you degrade the digital audio by compressing it into a 'lossy' format, the more the differences between the two diminish. Somewhere around 128 kbps you'd have trouble hearing any difference between the instruments, even between instruments from different families altogether.
So how exactly do you tell the difference between the instruments, and how well they are played? We don't even have words to describe everything that is happening there.
But you can hear the difference even though the computer just sees frequency and volume. Most of this sense of "what is making that sound" falls under the term timbre, and then most of it is thrown out in the digital realm.
Timbre is where they go looking for things to LOSE when compressing digital audio. Why should you care whether it's a piano or strings? You hear the note, you get the point, right?
The timbre is what many like Neil Young talk about as being part of the ‘soul’ of music, unquantifiable and very emotional for each person.
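One piece of timbre is quantifiable, though: the mix of overtones stacked on top of the fundamental note. Here's a minimal sketch of that idea, using the standard-library `math` module. The `violin_like` and `keyboard_like` harmonic recipes are made-up illustrations, not measurements of real instruments; the point is that two signals can share the same note (D4) and the same loudness (equal RMS) while their waveforms, and thus their sound, differ completely.

```python
import math

SAMPLE_RATE = 44100   # samples per second (CD rate)
FREQ = 293.66         # D4, the note both "instruments" hold
DURATION = 0.5        # seconds of audio to synthesize

def synth(harmonic_amps, n=int(SAMPLE_RATE * DURATION)):
    """Sum sine harmonics of FREQ, weighted by harmonic_amps."""
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        s = sum(a * math.sin(2 * math.pi * FREQ * (h + 1) * t)
                for h, a in enumerate(harmonic_amps))
        samples.append(s)
    return samples

def rms(samples):
    """Root-mean-square amplitude: a rough stand-in for loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Two invented harmonic recipes: same fundamental, different overtone mix.
violin_like   = [1.0, 0.7, 0.5, 0.4, 0.3]   # rich in upper harmonics
keyboard_like = [1.0, 0.1, 0.05, 0.0, 0.0]  # mostly the bare fundamental

a = synth(violin_like)
b = synth(keyboard_like)

# Scale the second signal so both play at exactly the same RMS loudness...
b = [s * rms(a) / rms(b) for s in b]

# ...yet sample for sample the waveforms are nothing alike. That gap is timbre.
print(round(rms(a), 4), round(rms(b), 4))
```

Same pitch, same volume, completely different waveform: that leftover difference is what a spectrum of harmonics encodes, and it's exactly the kind of detail a lossy encoder is tempted to discard.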
Lossy audio compression was developed in the era of dial-up modems (remember those?), and to shrink the file by 80% or more it actually threw out most of the timbre, most of the sub-lows, most of the highs, and much of the resolution that conveys panning and depth.
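As a quick sanity check on those numbers, here's the back-of-the-envelope arithmetic comparing a 128 kbps MP3 to uncompressed CD-quality audio (44.1 kHz sample rate, 16-bit samples, two stereo channels). This is just the math, not any particular encoder's behavior:

```python
# Back-of-the-envelope: how much a 128 kbps MP3 shrinks CD-quality audio.
cd_bitrate_kbps = 44100 * 16 * 2 / 1000   # sample rate * bit depth * stereo channels
mp3_bitrate_kbps = 128

reduction = 1 - mp3_bitrate_kbps / cd_bitrate_kbps
print(f"CD audio: {cd_bitrate_kbps} kbps")   # 1411.2 kbps
print(f"Size reduction: {reduction:.0%}")    # 91%
```

Against a CD-quality source, a 128 kbps file keeps less than a tenth of the original data. All of that missing nine-tenths has to come from somewhere.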
Part of what you hear as MP3 artifacts is all those holes in the timbre being filled with wrong data.
BTW, the cover image is a microscopic view of an actual groove in a record. Look at the amount of vibration data the stylus picks up as it drags through that groove. 16 bits is just not enough data space to recreate all of that.
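For context on what "16 bits" buys you, here's the standard arithmetic for 16-bit PCM, the format on a CD. (Whether ~96 dB is "enough" is exactly the debate this post is wading into; the numbers below are just the uncontroversial part.)

```python
import math

# Rough numbers for 16-bit PCM (CD audio).
bits = 16
levels = 2 ** bits                          # 65,536 discrete amplitude steps
dynamic_range_db = 20 * math.log10(levels)  # ratio of loudest to quietest step, in dB

print(levels, round(dynamic_range_db, 1))   # 65536 96.3
```

Every wiggle of that groove has to be rounded to one of those 65,536 steps, 44,100 times a second. The argument over analog versus digital is really an argument over whether your ears can hear what that rounding throws away.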