Why Moore's Law Should Blow Your Mind

Written by Richard Murison

Most of you will be familiar with Moore’s Law, formulated by Gordon E. Moore, who would go on to co-found Intel, waaaaay back in 1965.  Imagine, if you can, the state of electronic components technology back then.  Integrated circuits were in their infancy, and very few people today would look at the first-ever 1961 Fairchild IC and recognize it as such.  In 1965, commercial ICs comprised at most a few hundred transistors.  That was the state of the art when Moore formulated his law, which in its familiar form states that the number of transistors in an IC will double roughly every two years.  Considering the infancy of the industry at the time Moore made his prediction, it is astonishing that his law continues to hold up.  Today, the biggest commercial ICs have transistor counts in the billions.
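To get a feel for what that doubling actually implies, here is a quick back-of-the-envelope sketch in Python.  The 1965 starting count of 64 transistors is my own assumption, chosen only to make the arithmetic concrete; the point is the number of doublings, not the precise figures.

# A back-of-the-envelope check of the claim above: start from a mid-1960s
# chip and double the transistor count every two years.  The 1965 starting
# count is an assumption chosen only to illustrate the arithmetic.
start_year, end_year = 1965, 2025
transistors = 64  # assumed component count of a state-of-the-art 1965 IC

for year in range(start_year, end_year + 1, 2):
    print(f"{year}: ~{transistors:,} transistors")
    transistors *= 2

Thirty doublings between 1965 and 2025 multiplies that starting count by 2 to the power of 30, roughly a billion-fold, which lands squarely in the tens-of-billions territory of today’s largest chips.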

Also, every ten years or so, sage observers can be counted on to pronounce that Moore’s Law is bound to slow down over the coming decade due to [fill-in-the-blanks] technology limitations.  I can recall at least two such major movements, one in the early 1990s and another about ten years later.  The movers and shakers in the global electronics industry, however, continue to base their long-range planning on the inexorable progress of Moore’s Law, and it keeps holding up remarkably well.

A couple of years back I attended a profoundly illuminating talk given by John La Grou, CEO of Millennia Media.  John showed how Moore’s Law extends to a number of core technologies that relate to the electronics industry, and touched on the mechanisms that underlie these developments.  What was most impressive, however, was how he expressed dry concepts such as transistor counts in more meaningful terms.  The one which particularly caught my attention was a chart expressing the related growth in computing power.  Its Y-axis was marked off in units such as the brainpower of a flea, the brainpower of a rat, the brainpower of a human, and the combined brainpower of all humans on Earth.  In his chart, today’s CPU has slightly more than the brainpower of a rat, but falls massively short of the brainpower of a human.  By 2050, however, which is within the lifetimes of many of you reading this, your average computer workstation will be powered by something approaching the combined brainpower of every human being on Earth.

I wonder if, back in 1965, Gordon Moore ever paused to imagine the practical consequences of his law.  I wonder if he contemplated the possibility of having a 2014 Mac Pro on his office desk, a computer possessed of processing power equivalent to the sum total of every computer ever built up to the time Apple introduced their first ever PC.  Now Moore was a smart guy, so I’m sure he did the math, but if he did, I wonder if he ever asked himself what a person might actually DO with such a thing.  He must have done, but posterity doesn’t record his conclusions.  In the same way, I wonder what a person might do in 2050 with a computer having at its disposal the combined brainpower of every human being on the planet.  Posterity is not likely to record my conclusions either.  But John La Grou has given a lot of thought to these matters, and he stood before us to discuss them.

La Grou’s talk focused on audio-related applications.  In particular he talked about what he referred to as immersive applications: in effect, wearable technology that would immerse the wearer in a virtual world of video and audio content.  Specifically for the audience at hand, he foresaw recording studio technology being driven by VR in directions that evoked the sci-fi movie Minority Report.  You want more channels in your mixing desk? … with a hand gesture you just swipe them in!  He was very clear indeed that the technology roadmaps being followed by the industry would bring about the ability to achieve those goals within a remarkably short period of time.  And indeed, such tools are already in well-advanced stages of active development.

La Grou talked about 3D video technology with resolution indistinguishable from reality, and audio content to match.  He was very clear that he did not think he was stretching the truth in any way to make these projections, and expressed a personal conviction that these things would actually come to fruition quite a lot faster than the already aggressive timescales he was presenting to the audience.  He showed some really cool video footage of unsuspecting subjects trying out Oculus Rift virtual reality headsets (by coincidence, the acquisition of Oculus by Facebook was announced on the very day of his talk).  I won’t attempt to describe it, but we watched people who could no longer stand upright.  La Grou has tried the Oculus Rift himself and spoke of its alarmingly convincing immersive experience.

At the start of La Grou’s talk, he played what he described as the first ever audio recording, made by a Frenchman nearly two decades before Edison.  Using an approach similar to Edison’s, this recording was made by a needle which scratched the captured waveform onto a moving sheet of smoke-blackened paper.  The recording was made without the expectation that it would ever be replayed; in fact the object was never to listen to the recorded sound, but rather to examine the resultant waveforms under a microscope.  But today, by digitizing the images, we can easily replay that recording, more than 150 years after the fact.  And when you do so, you can hear the Frenchman humming rather tunelessly over a colossal background noise level.  One imagines he never rehearsed his performance, or even paused to consider what he might attempt to record for that momentous occasion.  Consequently, history’s first ever recorded sound is a man humming tunelessly.
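Out of curiosity, here is a minimal sketch, in Python, of the general idea behind replaying such a trace: treat each pixel column of a scanned image as one instant in time, and the vertical position of the dark trace as the amplitude at that instant.  This is emphatically not the actual restoration pipeline used on the original recording; the filename, the scan orientation, and the 8 kHz playback rate are all assumptions made purely for illustration.

# A minimal sketch (not the researchers' actual pipeline) of turning a scanned
# phonautograph trace back into sound: each pixel column is one moment in time,
# and the vertical position of the dark trace is the instantaneous amplitude.
import numpy as np
from PIL import Image
from scipy.io import wavfile

img = np.asarray(Image.open("phonautograph_scan.png").convert("L"), dtype=float)

# Darker pixels carry the trace; weight each row by "darkness" and take the
# centroid in every column to estimate the needle's vertical displacement.
darkness = 255.0 - img
rows = np.arange(img.shape[0], dtype=float)
trace = (darkness * rows[:, None]).sum(axis=0) / (darkness.sum(axis=0) + 1e-9)

# Remove the slow drift of the moving paper and scale to the 16-bit range.
trace -= np.convolve(trace, np.ones(301) / 301, mode="same")
trace /= np.abs(trace).max() + 1e-9
samples = (trace * 32767).astype(np.int16)

# One pixel column = one sample; the true rate depends on the scan resolution
# and the speed of the original paper, so 8000 Hz here is just a guess.
wavfile.write("phonautograph.wav", 8000, samples)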

At the end of La Grou’s talk we watched the results of an experiment in which researchers imaged the brains of subjects while they were watching movies and other visual stimuli.  They confined themselves to imaging only the visual cortex.  In doing so, they could find no observable pattern in how particular images caused the various regions within the cortex to illuminate.  But today’s computers being the powerful things they are (i.e. smarter than the average rat), the researchers let the computer attempt to correlate the images being observed with the activity patterns being produced, using AI (Artificial Intelligence) techniques.  If I understand correctly, they then showed the subjects some quite unrelated images, and asked the computer to come up with a best guess for what each subject was seeing, based on the correlations previously established.  There is no doubt that the images produced by the computer corresponded remarkably well with the images that the subject was looking at.  In fact, the computer made about as good a reproduction of the image the subject was looking at as the playback of the 150-year-old French recording did of the original tuneless hum.
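For the curious, the core of that “correlate, then guess” approach can be caricatured in a few lines of Python.  The sketch below uses purely synthetic data and a simple ridge-regression decoder; it is meant only to illustrate the train-on-known-images, predict-on-unseen-images idea, and makes no claim to reflect the methods of the actual study.

# A toy sketch of the decoding idea described above, not the actual study:
# learn a linear mapping from visual-cortex activity to image pixels on a
# training set, then predict (reconstruct) images from activity the model
# has never seen.  All data here is synthetic, purely for illustration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_voxels, n_pixels = 500, 10, 200, 16 * 16

# Pretend each voxel responds as a fixed (but noisy) linear mixture of pixels.
true_mixing = rng.normal(size=(n_pixels, n_voxels))
train_images = rng.normal(size=(n_train, n_pixels))
test_images = rng.normal(size=(n_test, n_pixels))
train_activity = train_images @ true_mixing + 0.5 * rng.normal(size=(n_train, n_voxels))
test_activity = test_images @ true_mixing + 0.5 * rng.normal(size=(n_test, n_voxels))

# Fit a regularized linear decoder: brain activity in, image pixels out.
decoder = Ridge(alpha=10.0).fit(train_activity, train_images)
reconstructions = decoder.predict(test_activity)

# How well do the "best guesses" match the unseen images?
corr = [np.corrcoef(r, t)[0, 1] for r, t in zip(reconstructions, test_images)]
print(f"mean reconstruction correlation: {np.mean(corr):.2f}")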

I couldn’t help but think that it will be something less than – quite a lot less than – 150 years before this kind of technology advances to a practically useful level, one with literally mind-bending ramifications.  Already, researchers have imaged the brains of rats while they (the rats, that is, not the researchers …) were learning to find their way through a maze.  Later, while the rats slept, the researchers demonstrated clearly that the rats’ brains were replaying their routes through the maze as they dreamed.  In other words, this technology has already moved beyond the demonstration stage to the first stabs at deployment as a behavioral research tool.  Amazing, really.

And no sooner had I written the above than I came across something even more remarkable.  Researchers working with rhesus macaque monkeys (which, apparently, have a capacity for recognizing faces similar to our own) measured activity from various brain locations while a monkey was shown photographs of different human faces.  From these observations they were able to make some key determinations of how specific facial features register in the monkey’s brain.  Consequently, they were able to reconstruct a face that the monkey was seeing by monitoring the electrical activity of only 205 neurons in the monkey’s brain.  Below left is an actual facial image that was shown to the monkey, and on the right is a facial image reconstructed from the brain measurements.  Pretty darned incredible.
