>>> context weblog
sampling new cultural context
friday :: august 4, 2006
   
 
auditory code: how brain processes sound

Scientists at Carnegie Mellon University have discovered that our ears process the sounds we hear in the most efficient way possible. These results represent a significant advance in understanding how sound is encoded for transmission to the brain.

The research provides a new mathematical framework for understanding sound processing and suggests that our hearing is highly optimized, in terms of signal coding (the process by which sounds are translated into information by our brains), for the range of sounds we experience. The same work also has far-reaching, long-term technological implications, such as providing a predictive model to vastly improve signal processing for better-quality compressed digital audio files and designing brain-like codes for cochlear implants, which restore hearing to the deaf.

To achieve their results, the researchers took a radically different approach to analyzing how the brain processes sound signals. Abstracting from the neural code at the auditory nerve, they represented sound as a discrete set of time points, or a "spike code," in which acoustic components are represented only in terms of their temporal relationships with each other. That's because the intensity and basic frequency of a given feature are essentially "kernelized," or compressed mathematically, into a single spike. Just as a player piano roll can reproduce any song by recording which note to press and when, the spike code encodes any natural sound in terms of the precise timing of its elemental acoustic features. Remarkably, when the researchers derived the optimal set of features for natural sounds, they corresponded exactly to the patterns neurophysiologists observe in the auditory nerve.
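As a rough illustration of this idea, here is a minimal sketch in Python of a spike-code decoder: each spike is a (feature, time, amplitude) triple, and the waveform is rebuilt by summing scaled, time-shifted copies of the corresponding elemental features. The gammatone-shaped kernels, sample rate and spike values below are illustrative assumptions, not the parameters used in the study.

import numpy as np

FS = 16000  # sample rate in Hz (assumed for this sketch)

def gammatone_kernel(center_hz, duration=0.02, order=4, bandwidth=200.0):
    # Gammatone-shaped kernel, a common stand-in for auditory-nerve filters.
    t = np.arange(0.0, duration, 1.0 / FS)
    k = t ** (order - 1) * np.exp(-2 * np.pi * bandwidth * t) * np.cos(2 * np.pi * center_hz * t)
    return k / np.linalg.norm(k)

# A small dictionary of elemental acoustic features, one kernel per centre frequency.
kernels = [gammatone_kernel(f) for f in (300.0, 800.0, 2000.0, 4000.0)]

# The "spike code": each spike is (kernel index, time in samples, amplitude).
# Intensity and frequency are folded into the kernel choice and amplitude,
# so only the timing relationships remain explicit.
spikes = [(0, 1000, 0.9), (2, 1600, 0.5), (1, 2400, 0.7)]

def decode(spikes, kernels, n_samples):
    # Reconstruct a waveform as a sum of scaled, time-shifted kernels.
    x = np.zeros(n_samples)
    for m, tau, amp in spikes:
        k = kernels[m][: max(0, n_samples - tau)]
        x[tau:tau + len(k)] += amp * k
    return x

waveform = decode(spikes, kernels, n_samples=FS // 4)

The point of the sketch is simply that the reconstruction depends only on which features fire, when, and how strongly.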

"We've found that timing of just a sparse number of spikes actually encodes the whole range of nature sounds, including components of speech such as vowels and consonants, and natural environment sounds like footsteps in a forest or a flowing stream," said Michael Lewicki, associate professor of computer science at Carnegie Mellon and a member of the Center for the Neural Basis of Cognition (CNBC). "We found that the optimal code for natural sounds is the same as that for speech. Oddly enough, cats share our own optimal auditory code for the English language."

"Our work is the only research to date that efficiently processes auditory code as kernalized spikes," said Evan Smith, a graduate student in psychology at the CNBC.

Until now, scientists and engineers have relied on Fourier transforms, first developed some 200 years ago, to separate and reconstitute parameters such as frequency and intensity in traditional sound signal processing.
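For contrast, a conventional Fourier-based analysis of the kind referred to above decomposes a signal into frequency components and their intensities rather than into timed spikes. The short Python sketch below shows that traditional decomposition using an FFT; the test tone and parameters are made up for illustration.

import numpy as np

fs = 16000
t = np.arange(0.0, 0.25, 1.0 / fs)
# A toy signal: two sinusoids with different intensities.
tone = 0.6 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)

spectrum = np.fft.rfft(tone)
freqs = np.fft.rfftfreq(len(tone), d=1.0 / fs)
intensities = np.abs(spectrum)

# The two strongest frequency components should sit near 440 Hz and 2000 Hz.
print(sorted(freqs[np.argsort(intensities)[-2:]]))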

Smith and Lewicki's approach instead dissects sound based only on the timing of compressed "spikes" associated with broad classes of acoustic features: vowel-like sounds (such as cat vocalizations), consonant-like transients (such as rocks hitting one another) and sibilant-like ambient noise.
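One generic way to obtain such a spike code from a waveform is a greedy, matching-pursuit style procedure: repeatedly find the kernel and time of best fit, record a spike, subtract it, and continue on the residual. The sketch below follows that generic recipe, reusing the kernels and decoder from the earlier sketch; it is an assumption for illustration, not the authors' exact algorithm.

import numpy as np

def encode(x, kernels, n_spikes=50):
    # Greedy matching-pursuit style encoder: returns (kernel index, time, amplitude) spikes.
    residual = x.astype(float).copy()
    spikes = []
    for _ in range(n_spikes):
        best = None
        for m, k in enumerate(kernels):
            corr = np.correlate(residual, k, mode="valid")
            tau = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[tau]) > abs(best[2]):
                best = (m, tau, corr[tau])
        m, tau, amp = best
        # Remove the best-fitting scaled, shifted kernel and record the spike.
        residual[tau:tau + len(kernels[m])] -= amp * kernels[m]
        spikes.append((m, tau, float(amp)))
    return spikes

# Example usage with the earlier sketch's variables:
# spike_code = encode(waveform, kernels, n_spikes=10)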

The authors' research combines computer science, psychology, neuroscience and mathematics. >from *Carnegie Mellon Scientists Show How Brain Processes Sound*. Landmark Results Could Improve Devices from iPods to Cochlear Implants. February 23, 2006

related context
> sound-analysis breakthrough. extremely high-resolution time-frequency analysis. july 26, 2006
> brain frequency map. researchers map out numerous areas in the brain where sound frequencies are processed. june 22, 2006
> sound of silence activates auditory cortex. auditory imagery is the subjective experience of hearing in the absence of auditory stimulation. 2005
> sonification. data sonification is becoming one of the most promising analysis tools, since sounds can summarize significant amounts of information and can be characterized, stored and studied more simply and easily than other data representations. november 25, 2005
> how we hear. researchers discovered how tiny cells in the inner ear change sound into an electrical signal the brain can understand. may 7, 2002

imago
> auditory-protective follie

| permaLink






   "active, informed citizen participation is the key to shaping the network society. a new 'public sphere' is required." seattle statement
| home | site map | about context | donate | lang >>> español - català |
http://straddle3.net/context/03/en/2006_08_04.html