Archive for the ‘T is for Twenty-One Senses’ Category

BrainDriver: Thoughts Drive A Car

Saturday, February 19th, 2011

Image by Raúl Rojas/Freie Universität Berlin

Imagine you could drive your car using only your thoughts. Raúl Rojas, a professor of Artificial Intelligence at the Freie Universität Berlin, and his team have demonstrated how a driver can use a brain interface to steer a vehicle. Dubbed the ‘BrainDriver’ project, it is part of a larger research project into autonomous driving, which they call “MadeInGermany.”

To record brain activity, the researchers use a “neuroheadset,” an electroencephalography (EEG) sensor made by the San Francisco-based company Emotiv and originally designed for gaming. After a few rounds of “mental training,” the driver learns to move virtual objects only by thinking. Each action corresponds to a different pattern of brain activity, and the BrainDriver software maps the patterns to specific commands—turn left, turn right, accelerate, etc. The researchers then feed these commands to the drive-by-wire system of the vehicle, a modified Volkswagen Passat Variant 3c. Now the driver’s thoughts can control the engine, brakes, and steering.
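The control loop is easy to picture: classify the driver’s current thought pattern, map it to a discrete driving command, and forward only the commands the classifier is confident about. Here is a minimal sketch in Python; the mental-action labels, the confidence threshold, and the mapping are illustrative assumptions, not the actual BrainDriver code or the Emotiv API.

```python
from enum import Enum
from typing import Optional

class Command(Enum):
    TURN_LEFT = "turn_left"
    TURN_RIGHT = "turn_right"
    ACCELERATE = "accelerate"
    BRAKE = "brake"

# Hypothetical mapping from trained mental actions (the kind of labels a
# gaming headset reports after "mental training") to driving commands.
PATTERN_TO_COMMAND = {
    "left": Command.TURN_LEFT,
    "right": Command.TURN_RIGHT,
    "push": Command.ACCELERATE,
    "pull": Command.BRAKE,
}

def on_mental_action(label: str, confidence: float) -> Optional[Command]:
    """Translate one classified thought pattern into a driving command,
    dropping low-confidence readings so noise never steers the car."""
    if confidence < 0.7:  # illustrative threshold, not BrainDriver's
        return None
    return PATTERN_TO_COMMAND.get(label)

# Example: a confidently detected "left" thought becomes a steering command.
print(on_mental_action("left", confidence=0.85))  # Command.TURN_LEFT
```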

The researchers caution that the BrainDriver application is still a demonstration and is not ready for the road. But they say that future human-machine interfaces like this have huge potential to improve driving, especially in combination with autonomous vehicles. As an example, they mention an autonomous cab ride in which the passenger could decide, only by thinking, which route to take when more than one possibility exists.

This type of non-invasive brain interface could also allow disabled and paralyzed people to gain more mobility in the future, much as is already happening in applications such as robotic exoskeletons and mind-controlled prosthetics.

Watch the researchers’ road test of the brain-controlled car on YouTube.

Via IEEE Spectrum

The Human Filter

Tuesday, October 12th, 2010

Game designer Will Wright tells SPTNK that a remarkable amount of our intelligence is actually ignoring the world intelligently. The trick is to program the processor in your imagination.

Listen here: The Human Filter

People See Shape of Time

Tuesday, April 6th, 2010

Photo via flickr by h.koppdelaney

As sci-fi fans know, Time Lords—time-traveling humanoids with the ability to understand and perceive events throughout time and space—are the stuff of fiction. But according to David Brang of the Department of Psychology at the University of California, San Diego, they really do walk among us: members of an elite group with the power to perceive the geography of time, a newly identified category of ‘time-space synesthetes’ who experience time as a spatial construct.

Synesthesia is a condition in which the senses are mixed so that, for example, a sound or a number has a color. Brang suspects that ‘time-space synesthesia’ arises when the neural processes underlying spatial processing are unusually active. In other words, these individuals ‘see time.’

“In general, these individuals perceive months of the year in circular shapes, usually just as an image inside their mind’s eye,” states Brang. “These calendars occur in almost any possible shape, and many of the synaesthetes actually experience the calendar projected out into the real world.”

Brang and colleagues recruited 183 students and asked them to visualize the months of the year and construct this representation on a computer screen. Four months later the students were shown a blank screen and asked to select a position for each of the months. Uncannily, four of the 183 students proved to be time-space synesthetes: they placed their months in a distinct spatial array—such as a circle—that remained consistent across the two sessions.
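The consistency test can be reduced to a simple distance check: if a participant’s month layout four months later lands close to the original layout, the spatial array is stable. A minimal sketch, with a purely illustrative pixel cutoff (Brang’s actual scoring is not described in the post):

```python
import math

def mean_displacement(first: dict, second: dict) -> float:
    """Average screen distance between each month's position in the first
    session and its position four months later. Each session maps a month
    name to (x, y) screen coordinates."""
    return sum(math.dist(first[m], second[m]) for m in first) / len(first)

def is_spatially_consistent(first: dict, second: dict, cutoff: float = 50.0) -> bool:
    """Flag a stable spatial array; the 50-pixel cutoff is an assumption."""
    return mean_displacement(first, second) < cutoff
```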

One of Brang’s subjects was able to see the year as a circular ring surrounding her body. The “ring” rotated clockwise throughout the year so that the current month was always inside her chest with the previous month right in front of her chest.

Brang did not speculate on whether time-space synesthetes can regenerate or whether they have two hearts, both key characteristics of Time Lords.

via KurzweilAI and New Scientist

In a 2002 interview, Hinderk Emrich, Chair of the Department of Psychiatry at the Hannover Medical School, discussed with Sputnik Observatory the possibility of our sensorial evolution into synesthetes:

We have the evolution of biological systems. This is an evolution by Darwinian processes, by mutations, and so on. You have the evolution of the brain, of course. The brain has this encephalization, which means extreme growth of cortical structures. But we have not only these two types of evolution. We also have cultural evolution, which means that over the past 10,000 years or so, an enormous body of knowledge has evolved and we have to cope with this knowledge. And our brains have to cope with all of these fantastic possibilities of cognition. For example, it is very difficult for mathematicians to cope with the problem: how can I visualize four-dimensional or five-dimensional space? And synesthetes, since they live in a hyperreality, a more complex reality, could internally represent more complex realities. So possibly synesthesia is a mode of cognitional evolution of mind. This is a speculation, we don’t know it, but in some regards synesthetes are higher: they have a more pronounced capacity for memorizing and they very often have mathematical abilities. And they can visualize complex realities, so it is possible that synesthesia is one mode of the evolution of the senses.

Tactile Hearing

Thursday, January 28th, 2010

Photo via flickr by Mat_the_W

Humans use their whole bodies, not just their ears, to understand speech, according to University of British Columbia linguistics research.

It is well known that humans naturally process facial expressions along with what is being heard to fully understand what is being communicated. The UBC study is the first to show that we also naturally process tactile information to perceive the sounds of speech.

Prof. Bryan Gick of UBC’s Dept. of Linguistics, along with PhD student Donald Derrick, found that air puffs directed at skin can bias perception of spoken syllables. “This study suggests we are much better at using tactile information than was previously thought,” says Gick, also a member of Haskins Laboratories, an affiliate of Yale University.

The study, published in Nature, offers findings that may be applied to telecommunications, speech science and hearing aid technology.

English speakers use aspiration—the tiny bursts of breath accompanying speech sounds—to distinguish sounds such as “pa” and “ta” from unaspirated sounds such as “ba” and “da.” Study participants heard eight repetitions of these four syllables while inaudible air puffs—simulating aspiration—were directed at the back of the hand or the neck.

When the subjects—66 men and women—were asked to distinguish the syllables, it was found that syllables heard simultaneously with air puffs were more likely to be perceived as aspirated, causing the subjects to mishear “ba” as the aspirated “pa” and “da” as the aspirated “ta.” The brain associated the air puffs felt on skin with aspirated syllables, interfering with perception of what was actually heard.

“Our study shows we can do the same with our skin, ‘hearing’ a puff of air, regardless of whether it got to our brains through our ears or our skin,” says Gick.

Future research may include studies of how audio, visual and tactile information interact to form the basis of a new multi-sensory speech perception paradigm.

via Science Daily

What We Desire, We See Closer

Friday, January 22nd, 2010

Photo via flickr by Desirée Delgado

We assume that we see things as they really are. But according to a new report in Psychological Science, if we really want something, that desire may influence how we view our surroundings.

Psychological scientists Emily Balcetis from New York University and David Dunning from Cornell University conducted a set of studies to see how our desires affect perception. In the first experiment, participants had to estimate how far a water bottle was from where they were sitting. Half of the volunteers were allowed to drink water before the experiment, while the others ate salty pretzels, thus becoming very thirsty. The results showed that the thirsty volunteers estimated the water bottle as being closer to them than did the volunteers who had drunk water earlier.

Our desire for certain objects may also result in behavioral changes. In a separate experiment, volunteers tossed a beanbag towards a gift card (worth either $25 or $0) on the floor, winning the card if the beanbag landed on it. Interestingly, the volunteers threw the beanbag much farther if the gift card was worth $0 than if it was worth $25—that is, they underthrew the beanbag when attempting to win a $25 gift card, because they viewed that gift card as being closer to them.

These findings indicate that when we want something, we actually view it as being physically close to us. The authors suggest that “these biases arise in order to encourage perceivers to engage in behaviors leading to the acquisition of the object.” In other words, when we see a goal as being close to us (literally within our reach), it motivates us to keep on going to successfully attain it.

via ScienceDaily

The Magic of Mind

Saturday, December 26th, 2009

Photo via flickr by Growing Minds

For over a millennium, mankind has dreamed of the ability to control objects with the power of thought.

Inside the recent edition of H+ magazine is an advertisement for Emotiv, a human-computer interface system based on the latest developments in neurotechnology that uses electrical signals produced by the brain to wirelessly detect a player’s thoughts, feelings and expressions while gaming.

The Emotiv website claims: “Fulfill the fantasy of having supernatural powers controlling the world with your mind!”

Emotiv Systems, headquartered in San Francisco, California, was founded in 2003 by four award-winning scientists and executives: internationally recognized neuroscientist Professor Allan Snyder, chip-design pioneer Neil Weste, and technology entrepreneurs Tan Le and Nam Do.

While Emotiv is currently focusing on the electronic gaming industry, the applications for the Emotiv EPOC™ technology and interface span an amazing variety of potential industries—interactive television, accessibility design, market research, medicine, even security.

Birth of Psychedelic Culture

Saturday, December 19th, 2009

Photo via flickr by oh snap

The latest alert from Barlowfriendz, care of John Perry Barlow, Peripheral Visionary; Managing Partner, Algae Systems; and Co-Founder and Rocking Chair at the Electronic Frontier Foundation, announces the release of the book Birth of a Psychedelic Culture—Conversations about Leary, the Harvard Experiments, Millbrook and the Sixties, by Ram Dass and Ralph Metzner, from Synergetic Press. The book’s Foreword was written by Barlow and can be downloaded here.

Following is an excerpt:

“It’s now almost half a century since that day in September 1961 when a mysterious fellow named Michael Hollingshead made an appointment to meet Professor Timothy Leary over lunch at the Harvard Faculty Club. When they met in the foyer, Hollingshead was carrying with him a quart jar of sugar paste into which he had infused a gram of Sandoz LSD. He had smeared this goo all over his own increasingly abstract consciousness and it still contained, by his own reckoning, 4,975 strong (200 mcg) doses of LSD. And the mouth of that jar became perhaps the most significant of the fumaroles from which the ‘60s blew forth.”

The Brain Filter

Tuesday, December 1st, 2009

Photo via flickr by PhOtOnQuAnTIQuE

Researchers at the Kavli Institute for Systems Neuroscience and Centre for the Biology of Memory at the Norwegian University of Science and Technology have discovered a mechanism that the brain uses to filter out distracting thoughts and focus on a single bit of information.

They found that the hippocampus selectively tunes in to different frequencies of gamma waves coming from different brain areas. The lower gamma wave frequencies are used to transmit memories of past experiences, and the higher frequencies are used to convey what is happening where you are right now.

“The classical view has been that signaling inside the brain is hardwired, subject to changes caused by modification of connections between neurons,” says Edvard Moser, Kavli Institute for Systems Neuroscience director. “Our results suggest that the brain is a lot more flexible. Among the thousands of inputs to a given brain cell, the cell can choose to listen to some and ignore the rest and the selection of inputs is changing all the time. We believe that the gamma switch is a general principle of the brain, employed throughout the brain to enhance inter-regional communication.”

via KurzweilAI.net

Learning to See the Invisible

Friday, October 30th, 2009

Photo via flickr by Sweet J (away)

A new study at Max Planck Institute for Brain Research in Germany reveals that our brains can be trained to consciously see stimuli that would normally be invisible.

Lead researcher Caspar Schwiedrzik said the brain is an organ that continuously adapts to its environment and can be taught to improve visual perception.

“A question that had not been tackled until now was whether a hallmark of the human brain, namely its ability to produce conscious awareness, is also trainable,” Schwiedrzik said. “Our findings imply that there is no fixed border between things that we perceive and things that we do not perceive — that this border can be shifted.”

The researchers showed subjects with normal vision two shapes, a square and a diamond, each immediately followed by a mask. The subjects were asked to identify the shape they saw. The first shape was invisible to the subjects at the beginning of the tests, but after five training sessions, subjects were better able to identify both the square and the diamond.

The ability to train brains to consciously see might help people with blindsight, whose primary visual cortex has been damaged through a stroke or trauma. Blindsight patients cannot consciously see, but on some level their brains process their visual environment. A Harvard Medical School study last year found that one blindsight patient could maneuver down a hallway filled with obstacles, even though the subject could not actually see.

Schwiedrzik said the new research may help blindsight patients gain conscious awareness of what their minds can see, and he suggested that new research should address whether the brains in blindsight patients and people with normal vision process the information the same way.

via KurzweilAI.net and ScienceDaily

In a conversation with Sputnik Observatory, game designer Will Wright explains that we take in more data than our mind is conscious of—that our brain is a filter—and he wonders how we get through the filter.

If you look at typical human senses and do a rough estimate of the data coming in through them, the visual sense takes in about 100 million bits per second, auditory is about 10 million bits per second, touch is about 100 thousand bits per second, etc, etc. So our total sensory input, at any given time, is over 100 million bits per second. But yet our conscious stream is something like 200 or 300 bits per second. So there’s something like a million-to-one difference between the total amount of data our bodies are taking in and the amount that we’re consciously aware of and thinking of. It seems from that you can kind of gather that probably 95% of our intelligence is a filtering function. How much of that millions of bits of information are we ignoring? And how are we picking out the bits that are relevant for us to put our attention on? I think there’s a remarkable amount of our intelligence that actually is ignoring the world intelligently. And we’re just noticing very small bits of it at any given point in time. So pumping a lot more information into the person, the fact is, most of it is going to get ignored. That’s what our brain is there for. A lot of our brain is there to, basically, filter out most of this data coming into us. I think what’s almost more powerful is how do we get through that filter?

How do we give the player data that they will determine is relevant to whatever they’re doing at the time? Whatever problem they’re solving, whatever experience they’re in. And then once they get that small amount of data, they can decompress it in their imagination into a vast, elaborate world. So I think a lot of what’s important to keep in mind is that, in games, we’re actually running on two processors. There’s the processor in front of you on the desktop, and then there’s the processor in your imagination. And really what we want to do, for the most part, is program the processor in your imagination.
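Wright’s figures are easy to check in a few lines. Using only the numbers from the quote, the filter ratio comes out at a few hundred thousand to one, the “something like a million-to-one” order of magnitude he describes:

```python
# Bandwidth figures as quoted by Wright (bits per second).
visual, auditory, touch = 100_000_000, 10_000_000, 100_000
total_input = visual + auditory + touch  # just over 100 million bits/s

conscious = 250                          # midpoint of "200 or 300 bits/s"
ratio = total_input / conscious

print(f"total sensory input: {total_input:,} bits/s")
print(f"filter ratio: roughly {ratio:,.0f} to 1")  # ~440,000 to 1
```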

Brain2Brain Communication

Thursday, October 8th, 2009

Photo via flickr by jmsmytaste

New research from the University of Southampton has demonstrated that brain-to-brain (B2B) communication through the power of thought is possible—with the help of electrodes, a computer and an Internet connection.

This experiment goes a step beyond Brain-Computer Interfacing (BCI), which captures brain signals and translates them into commands, says Dr Christopher James of the University’s Institute of Sound and Vibration Research.

It involved one person using BCI to transmit thoughts, translated as a series of binary digits, over the Internet to another person, whose computer received the digits and transmitted them to the second user’s brain by flashing an LED lamp.

While attached to an EEG amplifier, the first person generated and transmitted a series of binary digits, imagining moving their left arm for zero and their right arm for one. The second person was also attached to an EEG amplifier, and their PC picked up the stream of binary digits and flashed an LED lamp at two different frequencies, one for zero and the other for one. The pattern of the flashing LEDs is too subtle for the second person to perceive consciously, but it is picked up by electrodes measuring activity over the recipient’s visual cortex.

The encoded information is then extracted from the brain activity of the second user, and the PC can decipher whether a zero or a one was transmitted. This shows true brain-to-brain activity.
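Seen end to end, the setup is a one-bit-at-a-time channel: motor imagery encodes each binary digit, the Internet carries it, and a frequency-tagged LED re-encodes it for the recipient’s visual cortex. A minimal sketch of the bit handling on each side; the two flicker frequencies and the function names are illustrative assumptions, not the Southampton team’s actual parameters:

```python
# Illustrative flicker frequencies: one per binary digit, following the
# scheme of flashing the LED at two different rates (values assumed).
FREQ_HZ = {0: 10.0, 1: 12.0}

def to_bits(message: str) -> list[int]:
    """Sender side: turn a message into the bit stream the first subject
    would transmit, one imagined left-arm (0) or right-arm (1) move per bit."""
    return [(byte >> i) & 1
            for byte in message.encode("ascii")
            for i in range(7, -1, -1)]

def led_schedule(bits: list[int]) -> list[float]:
    """Receiver side: map each incoming bit to the LED flicker frequency
    presented to the second subject and read out over the visual cortex."""
    return [FREQ_HZ[b] for b in bits]

bits = to_bits("hi")
print(bits[:8])                # the letter 'h' as eight binary digits
print(led_schedule(bits)[:4])  # first four flicker frequencies
```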

via ScienceDaily