Listening to Light


"All that's to come
and everything under
the sun is in tune
but the sun
is eclipsed by the moon."

Roger Waters
from "Eclipse" on the 1973 LP "The Dark Side of the Moon"

For generations, humans have tried to link sound and light. In 1704, Sir Isaac Newton, one of humankind's most celebrated thinkers, proposed that the seven colors into which a prism splits white light (red, orange, yellow, green, blue, indigo, and violet) are related to the seven tones of the diatonic scale. To the non-musician, these are the white keys on a piano. Besides sitting next to each other (most a whole step apart, two of them a half step), the notes have a harmonic relationship: starting on F, each is a perfect fifth above the last (F-C-G-D-A-E-B), tracing part of music's Circle of Fifths.
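The fifths relationship above is easy to verify. The short sketch below stacks perfect fifths (7 semitones each, wrapping around the octave) starting from F; the chromatic note names and the 7-semitone interval are standard music theory, not drawn from the source.

```python
# Generate the seven diatonic (white-key) notes by stacking perfect
# fifths starting from F, as described in the text.
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def stack_fifths(start="F", count=7):
    """Return `count` notes, each a perfect fifth above the previous one."""
    idx = CHROMATIC.index(start)
    notes = []
    for _ in range(count):
        notes.append(CHROMATIC[idx])
        idx = (idx + 7) % 12  # a perfect fifth spans 7 semitones
    return notes

print(stack_fifths())  # ['F', 'C', 'G', 'D', 'A', 'E', 'B']
```

The seven notes produced are exactly the white keys of the piano, with no sharps or flats appearing until an eighth fifth is stacked.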

Newton mapped these notes onto his circle of colors in a convincing way, and in doing so seeded a fundamental 18th-century theory of universal harmony for scientists and musicians alike. The theory was so well regarded that French mathematician Louis Bertrand Castel invented a modified harpsichord, popularly called the "ocular harpsichord" or "color organ," which exposed seven different colored lamps for the corresponding notes and harmonies. A "harpsichord for the eyes." While some pooh-poohed the theory, Baroque composer Georg Philipp Telemann wrote several pieces for the instrument. The theory was eventually discredited, but not before musicians, artists, and writers embraced this new "visual music."

In 1880, Alexander Graham Bell and Charles Tainter sent the human voice over a beam of light using their invention, the photophone. That technology would lead to fiber optics a century later. A few decades on, German physicist Ernst Ruhmer realized he could expose the fluctuations of the photophone's arc light onto a moving roll of film. By reversing the procedure and shining a light through the moving roll of developed film onto a selenium cell, the original sound was reproduced. This method would be the anchor technology of motion-picture sound for nearly a hundred years.

Early attempts at synchronizing sound and picture on film were clumsy and inexact. Most involved cutting a disc or cylinder while filming, and keeping a disc in playback sync with the projected picture was next to impossible. The obstacle was finally overcome by using light to record sound onto a separate but synchronized sound-film recorder. For the theater release, an optical soundtrack was exposed along the edge of the print, beside the sprocket holes. Because sound and picture resided on the same medium, the sync problems of two separate technologies were history. Today, presentation and archival 35mm motion-picture prints still carry optical soundtracks, both analog and digital.

The next frontier for the harmony of light and sound is data. We are now able to store light as sound. Current computer technology is "electronic," built on the electron, which moves much more slowly than a photon, a particle of light. The next generation of computers will be based on light. We can already move data at light speed through fiber optics, but we cannot yet store or process photons with tortoise-speed electron technology; that is why photonic data in fiber-optic channels is slowed way down to a pokey electron speed. New light-based computers will process data at near light speed. Here's how it works: photonic information, essentially a pulse of photons traveling like a freight train, enters a photonic microchip in one direction through a maze-like path designed to lengthen its travel time. Those photons collide in the middle with a separate "write" pulse of photons sent into the maze from the opposite direction. This train wreck at the center of the chip produces a small acoustic wave in the microchip. While the chip is still vibrating like a bell (less than 10 nanoseconds later), a separate "read" pulse of photons is sent in. That pulse passes through the still-vibrating chip, picks up the stored information as photonic data, and is directed out of the chip for processing. Although the photonic data is slowed to below light speed, computers using photonic circuitry are projected to be some 20 times faster than today's electronic machines.

And finally, lasers are changing the way hearing aids operate. Amplifying a sound wave with a traditional hearing aid can be inefficient and problematic. Scientists know that certain types of light can strike a surface, have their photon energy absorbed and converted into mechanical waves, which in turn generate sound waves. Researchers found that they could stimulate the inner ear with a laser, producing electrical impulses that travel to the brain and are interpreted as sound. Taking advantage of the fact that sound waves move far more slowly than light, the new device captures a sound wave and converts it to laser pulses as the original sound reaches the ear canal. This eliminates the feedback, distortion, and other problems of acoustically amplified hearing aids. While still in development, these new "laser" hearing aids could change the world for aging rock-and-rollers.
