The Vector DAC
In Wednesday's post, Ignorance is Bliss, I covered a bit of high-end history: the invention of the separate DAC for high-end audio. I promised another post about a future idea that might be interesting. First, a primer.
A DAC is a digital-to-analog converter - its name describes exactly what it does. The beginning of this chain is an A-to-D converter (analog-to-digital), which does the opposite: it converts the continuous audio stream from a microphone into something a computer can understand - bits. To do this, the continuous audio is broken into discrete samples that are then stored optically (CD) or magnetically (hard drive). The DAC reverses this process.
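The round trip described above can be sketched in a few lines of code. This is a minimal illustration, not how any real converter is built: the `adc` and `dac` functions are hypothetical names, and the "continuous" signal is just a Python function we sample at CD rate.

```python
import math

def adc(signal, sample_rate, duration, bits=16):
    """Sample a continuous signal into discrete integers (the A-to-D step)."""
    levels = 2 ** (bits - 1)          # 32768 steps each side of zero at 16 bits
    n = int(sample_rate * duration)
    return [round(signal(i / sample_rate) * (levels - 1)) for i in range(n)]

def dac(samples, bits=16):
    """Turn the stored integers back into analog-style values (the D-to-A step)."""
    levels = 2 ** (bits - 1)
    return [s / (levels - 1) for s in samples]

# A 1 kHz tone, sampled at 44.1 kHz (CD rate) for one millisecond
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
samples = adc(tone, 44100, 0.001)     # 44 discrete quanta
restored = dac(samples)               # close to the original, but not identical
```

The key point for what follows: `restored` is only ever an approximation, because rounding to a fixed number of levels throws information away.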
Digital audio has limits - set by the number of bits. This has always been a problem for me, because in real life there are no such limits: sounds can get as loud or as soft as they do, without restriction.
Some of my colleagues will argue that these limits are meaningless, since 24-bit audio has a dynamic range of 144 dB - far exceeding analog and, for that matter, human hearing (and 32-bit audio goes higher still).
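As a quick check of those numbers: the theoretical dynamic range of an n-bit system works out to about 6.02 dB per bit, which is where the 144 dB figure for 24-bit audio comes from.

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an n-bit system: 20 * log10(2^n)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))   # CD audio, roughly 96 dB
print(round(dynamic_range_db(24), 1))   # roughly 144 dB, as quoted above
```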
While the numbers are correct, I would argue that most of the available dynamic range is unusable because it sits far below what we can hear - and the usable range is not much more than our ability to hear. Certainly we could make better use of the dynamic range we have, but currently that's not the case.
So for a moment let's imagine a digital audio scheme with no practical limits on loudness - a system where, if we had a microphone that could capture everything from the quietest sounds to the loudest (we don't), we could record it all and play it back.
What I like to call the "Vector DAC" is such a system: one with essentially unlimited dynamic range, whose signal could be compressed or expanded without degradation. The idea came to me through photography.
All photography (digital or film) is much like today's DACs - based on discrete quanta or bits. In film it's called grain (actual grains of silver); in digital photography it's called pixels. Look too closely and what you see is not a picture but bits (like Legos) that, viewed from a distance, fool us into believing they are smooth and continuous.
You cannot scale a photo up or down without degrading the original, because the bits get resampled and detail is lost. Same with audio: compress it and you lose, expand it and you lose.
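You can see the loss with a toy example. Here a single row of pixel values stands in for an image; the `downscale` and `upscale` helpers are hypothetical stand-ins for real resampling, but the lesson is the same: shrink then enlarge, and the original never comes back.

```python
def downscale(pixels, factor):
    """Keep every factor-th pixel - the rest is simply discarded."""
    return pixels[::factor]

def upscale(pixels, factor):
    """Repeat each pixel - the discarded detail cannot be recovered."""
    return [p for p in pixels for _ in range(factor)]

row = [10, 20, 30, 40, 50, 60, 70, 80]        # one row of a raster image
round_trip = upscale(downscale(row, 2), 2)
print(round_trip)  # [10, 10, 30, 30, 50, 50, 70, 70] - not the original row
```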
Then we learned about vector-based imagery. In the late 1980s a company called Adobe introduced a program called Illustrator, and that is where many of us first encountered vector graphics. Unlike pixel-based systems, vector graphics can be scaled up or down without degradation of any kind - bigger or smaller, it's all the same.
Vectors work by having the computer record, instead of pixels, a mathematical description of a line: a starting point, an angle (the direction the line is going), and a length (how far it continues). Add a few other parameters - color, width, even speed - and you have a complete, scalable model of just about anything.
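A minimal sketch of the idea, with hypothetical `trace` and `scale` helpers: a shape is stored as (angle, length) pairs, and scaling is nothing more than multiplying the lengths - there are no pixels to degrade.

```python
import math

def trace(start, vectors):
    """Turn a list of (angle_degrees, length) vectors into points, from start."""
    x, y = start
    points = [(x, y)]
    for angle, length in vectors:
        x += length * math.cos(math.radians(angle))
        y += length * math.sin(math.radians(angle))
        points.append((x, y))
    return points

def scale(vectors, factor):
    """Scaling only multiplies each length - the shape itself is untouched."""
    return [(angle, length * factor) for angle, length in vectors]

square = [(0, 1), (90, 1), (180, 1), (270, 1)]   # four unit-length strokes
big = scale(square, 1000)                        # 1000x larger, same exact shape
```

This is exactly why an Illustrator file looks as crisp on a billboard as on a business card.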
Why not apply this to audio? After all, an analog signal can be described by the angle, duration, and speed of its movement, and if you know all that, you know where it is at any moment. There are no bits or discrete quanta to get in the way of scaling.
In the Vector DAC idea, we simply record the vectors and related data, which are completely scalable without degradation.
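To make the idea concrete, here is one possible (and deliberately simplified) way to store a waveform as vectors: a list of (duration, rise) segments between breakpoints instead of fixed-resolution samples. All the function names are hypothetical; a real system would need curves, not straight segments. But note what happens when we change the volume - it's pure arithmetic on the description, with no re-quantizing and no loss.

```python
def to_vectors(breakpoints):
    """Describe a waveform as (duration, rise) segments between breakpoints."""
    return [(t2 - t1, v2 - v1)
            for (t1, v1), (t2, v2) in zip(breakpoints, breakpoints[1:])]

def render(vectors, start=(0.0, 0.0)):
    """Reconstruct the breakpoints exactly from the vectors."""
    t, v = start
    points = [(t, v)]
    for dt, dv in vectors:
        t, v = t + dt, v + dv
        points.append((t, v))
    return points

def louder(vectors, gain):
    """'Volume' is just multiplication on the stored description."""
    return [(dt, dv * gain) for dt, dv in vectors]

# A simple triangle wave: (time in ms, amplitude) breakpoints
wave = [(0, 0.0), (1, 1.0), (3, -1.0), (4, 0.0)]
vecs = to_vectors(wave)
assert render(vecs) == wave     # the round trip is exact, unlike sampled audio
```

Compare this with the sampled example earlier in the post, where the round trip was only ever approximate.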
I think this would be a real breakthrough. Now, if only we had the time and resources.