Category: Attention

Gorillas we have missed: Sustained inattentional deafness for dynamic events.

Dalton, P., & Fraenkel, N. (2012). Gorillas we have missed: Sustained inattentional deafness for dynamic events. Cognition, 124(3), 367–372.

Selective attention is a crucial ability: it is what allows us to behave effectively in a world full of simultaneous stimuli.

Following the inattentional blindness paradigm, the authors focused on hearing, since it is considered an early warning system, tuned to detect unexpected stimuli. But is it rightly tuned?

In order to replicate the effect in hearing, Dalton and Fraenkel dissected the inattentional blindness paradigm into three components:

1. Task-relevant stimuli

2. Task-irrelevant stimuli

3. An unexpected critical stimulus

This last ingredient should be similar to the irrelevant stimuli only on the dimension that distinguishes both of them from the relevant stimuli; it should, however, differ from them on other dimensions such as spatial location, speed, trajectory, or shape.

Having said this, the intriguing thing about the inattentional deafness effect is that “the similarity between the unexpected critical stimulus and the irrelevant stimuli on the dimension upon which relevant and irrelevant are defined, can prevent the detection of the critical stimulus, despite its salience on a number of other dimensions” (note to self -> is this somewhat molded by expectations? Are we tuned to expect some things based on experience (we are!), and does this speed up processing? Expectations as a tool to process the world faster.)

This doesn’t seem very efficient, because in real-world situations, processing new and unexpected stimuli – fire alarms, unexpected movements – is likely to be more important than processing continually present yet task-irrelevant scene elements.

The twist in this experiment was that the authors used binaural sound to provide a realistic audio scene, and the critical stimulus was dynamic. Compared with the classic dichotic listening task, this set-up makes spatial separation much harder to rely on.


So, two men and two women, seated at two separate tables in the same room, were recorded while preparing for a party, with the dummy head placed between the two tables. The critical stimulus was a man saying “I am a gorilla” as he walked through the scene, and it lasted 19 s.
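To make the spatialization concrete, here is a minimal numpy sketch of a sound source walking across a stereo scene, using a simple constant-power pan. This is my own toy illustration, not the authors' method: they used an actual dummy-head (binaural) recording, which also carries interaural timing and spectral cues that a level-only pan lacks, and the 120 Hz tone below merely stands in for the recorded voice.

```python
import numpy as np

fs = 44100                                       # assumed sample rate
dur = 19.0                                       # the critical stimulus lasted 19 s
t = np.linspace(0, dur, int(dur * fs), endpoint=False)
voice = 0.1 * np.sin(2 * np.pi * 120 * t)        # placeholder for the voice track

# Constant-power pan from one side of the scene to the other as the
# "gorilla" walks past: pan position p runs from -1 (left) to +1 (right).
p = np.linspace(-1.0, 1.0, t.size)
stereo = np.column_stack([voice * np.sqrt((1 - p) / 2),
                          voice * np.sqrt((1 + p) / 2)])  # shape (n_samples, 2)
```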

In both experiments, the channels were reversed for half of the participants in order to balance for potential orientation effects.

In experiment 1, the gorilla passed near the men. Results showed that 90% of the participants attending to the men's conversation mentioned the gorilla, whereas only 30% of the participants attending to the women's conversation mentioned it.

In experiment 2, the gorilla was presented in “mirror image”, so that it appeared on the other side of the scene, passing near the women. This made it somewhat more blatant than in experiment 1, in the sense that the critical stimulus was now near the relevant stimuli and differed from them at least in voice pitch.

This time, 65% of the participants listening to the men mentioned the gorilla, while only 45% of those listening to the women mentioned it.
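As a back-of-the-envelope check on how far apart these detection rates are, the sketch below converts the reported percentages into counts and runs Fisher's exact test. The group size of 20 is purely hypothetical (the sample sizes and statistics aren't quoted above), and the choice of test is mine, not necessarily the authors'.

```python
from scipy.stats import fisher_exact

n = 20  # hypothetical group size; the actual n per condition isn't quoted above
for label, p_high, p_low in [("Exp 1", 0.90, 0.30), ("Exp 2", 0.65, 0.45)]:
    a = round(p_high * n)             # noticed, condition with the higher rate
    b = round(p_low * n)              # noticed, condition with the lower rate
    table = [[a, n - a], [b, n - b]]  # 2x2: noticed vs missed per condition
    _, p = fisher_exact(table)
    print(f"{label}: {a}/{n} vs {b}/{n}, Fisher p = {p:.3f}")
```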

The results provide solid evidence for the inattentional deafness effect with dynamic stimuli in 3D audio scenes. This finding could have serious implications for road safety.

Hopefully, more on that later.

 

Visual perceptual load induces inattentional deafness

Macdonald, J. S., & Lavie, N. (2011). Visual perceptual load induces inattentional deafness. Attention, Perception, & Psychophysics, 73(6), 1780–1789.

This seems to me to be the first paper on the subject, dating from 2011. The idea was to verify whether perceptual capacity is modality specific or not. The method was rather simple and low-tech, which is different from what I have been reading. And refreshing for that matter! Once again, the authors – who first brought attention to the inattentional deafness concept – modified an inattentional blindness paradigm to assess inattentional deafness.

The authors say that focused attention on a task results in reduced perception of irrelevant information. This reduction depends on the level of the perceptual load in the task. Perceptual load corresponds to the amount of information involved in the perceptual processing of the task stimuli. Tasks involving higher perceptual load consume all or most of attentional capacity, leaving little or none remaining for processing any task irrelevant information. In this scenario, the authors ask, would a car horn be noticed when you were attending to a visually loaded billboard?

They wanted to see whether perceptual load in a visual attention task would modulate conscious awareness of task-unrelated auditory tones.

To that end, a set of three experiments was run, all with a very similar set-up and procedure.

 

 

The task stimulus was a briefly presented cross whose two arms differed in colour and, subtly, in length. In the low-load task, the participant had to signal which arm of the cross was blue.

In the high-load task, the participant had to signal which arm was longer. At the end of the eighth trial, a critical sound stimulus (CS) was played, and at the end of the task the participant was asked whether s/he had noticed anything different from the previous trials.

In experiment 1, white noise was played continuously for 19 s during each trial. In the critical and control trials, a 180 Hz pure tone at 28 dB, lasting either 100 or 150 ms, was presented at the onset of the cross.
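For intuition, here is a rough reconstruction of that stimulus in numpy: a faint pure tone embedded at the start of a longer stretch of white noise. The sample rate and the relative tone/noise amplitudes are my assumptions; the paper specifies sound levels in dB, which this sketch does not calibrate.

```python
import numpy as np

fs = 44100                                   # assumed sample rate
trial_dur = 19.0                             # white noise for the whole trial
noise = 0.05 * np.random.randn(int(trial_dur * fs))

tone_dur = 0.15                              # the 150 ms variant of the tone
t = np.linspace(0, tone_dur, int(tone_dur * fs), endpoint=False)
tone = 0.02 * np.sin(2 * np.pi * 180 * t)    # 180 Hz pure tone

trial = noise.copy()
trial[:tone.size] += tone                    # tone at the onset of the cross
```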

The participants were later asked to describe the experiment, even if they had not noticed anything different.

The results demonstrated that error rates were significantly higher in the high-load condition than in the low-load condition. 7 out of 28 participants reported awareness of the tone in the high-load condition, whereas 21 out of 28 noticed something different in the low-load condition. These results show that the tasks did in fact differ in perceptual load and, what's more, that the inattentional deafness effect was indeed verified.

Experiment 2 set out to assess how far the phenomenon would hold, so the authors removed the white noise, leaving the critical sound unmasked.

Again, the high- and low-load conditions differed significantly in error rates. In the high-load condition 5 out of 24 participants noticed the tone, and in the low-load condition 21 out of 24 did.

This means that high perceptual load in a visual attention task reduces auditory awareness, thereby producing inattentional deafness, even with an unmasked tone and even when people are not actively ignoring sound.

Experiment 3 randomly intermixed the high- and low-load tasks within a longer block of 143 trials. Also, the low-load task was changed to a line-length discrimination task with a far greater length difference than in the high-load condition. As in experiment 2, no white noise was played.

In this experiment, reaction times were significantly longer and error rates higher in the high-load condition. 18 out of 32 participants reported awareness in the high-load condition, versus 28 out of 32 in the low-load condition. These results led the authors to the conclusion that inattentional deafness was driven by the level of visual perceptual load in the task rather than by any differences in motivation, vigilance, task engagement, or strategy.
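Since the awareness counts are given exactly, we can recompute the noticing rates and run a quick significance check per experiment. The sketch below uses Fisher's exact test; that choice is mine, and the authors may well have reported a different statistic.

```python
from scipy.stats import fisher_exact

# (noticed, n) per load condition, as reported for each experiment
experiments = {
    "Exp 1 (masked tone)":   ((7, 28), (21, 28)),
    "Exp 2 (unmasked tone)": ((5, 24), (21, 24)),
    "Exp 3 (intermixed)":    ((18, 32), (28, 32)),
}
for name, ((hi_hit, hi_n), (lo_hit, lo_n)) in experiments.items():
    table = [[hi_hit, hi_n - hi_hit], [lo_hit, lo_n - lo_hit]]
    _, p = fisher_exact(table)
    print(f"{name}: high load {hi_hit}/{hi_n}, "
          f"low load {lo_hit}/{lo_n}, Fisher p = {p:.4f}")
```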

 

All in all, results suggested that the elementary process of noticing the mere presence of a sound depends on an attentional capacity resource that is shared between the modalities of vision and hearing.

The authors claim that it is possible that some processing capacities are modality specific, while others draw on a shared cross-modal resource. For example, previous findings that perception of visual motion is reduced by attention to a high load visual stimulus stream, but not to an auditory word stream, may indicate that the perception of visual motion suffers from visual capacity limits.

Attentional Limitations with Head-up Displays

McCann, R. S., Foyle, D. C., & Johnston, J. C. (1993). Attentional limitations with head-up displays. Proceedings of the Seventh International Symposium on Aviation Psychology (pp. 70–75).

In 1993, McCann and collaborators wanted to test whether this effect also holds when using a HUD in an aviation task. The authors expected to find that visual attention could be focused either on the HUD or on the world beyond it, but not on both simultaneously. Their critique targeted the “blind” application of HUDs to aircraft without considering important human factors, from whose perspective the parallel-processing assumption is problematic.

According to the authors, there are three perceptual cues of the HUD (bear in mind we are in 1993) that could help to distinguish its symbology from the world:

1) HUDs are stationary;

2) HUDs are generally drawn in highly saturated green;

3) HUDs are oriented vertically with respect to the eye plane.

Since these cues are distinguishable, the visual system is likely to group the information transmitted by the HUD into one perceptual group, and the world into another. This may have a very negative influence on the main task of piloting or driving: objects in the world may not be processed in parallel with HUD symbology, and transitioning between the HUD and the world may be slowed down by the requirement to shift attention between groups.

So their study had two goals: the first was to test the hypothesis that the visual system parses HUDs and the world as separate perceptual groups, so that when attention is focused on the HUD (world), objects in the world (HUD) are excluded from processing. The second was to determine whether transitioning from the HUD to the world (and from the world to the HUD) requires a shift of attention.

The task was a detection task in a low-fidelity simulated approach to a runway, where participants had to find a target (a stop sign or a diamond sign). Before the potential targets appeared – three geometric symbols on the HUD and another three on the runway – the participant was cued as to whether the relevant set of symbols would appear on the HUD or on the runway. Participants were told that if the relevant target was a stop sign, the runway was closed and they should signal their intention to do a go-around by striking the upper key as quickly as possible. Alternatively, if the relevant target was a diamond, the runway was open and they should signal their intention to continue the landing by pressing the lower key as rapidly as possible.

The results supported both hypotheses: when subjects focused on the HUD for the duration of the trial, there was little effect of conflicting information in the world, and when subjects focused on the world, there was little influence of conflicting information on the HUD. These results add to previous findings that when pilots focus attention on the HUD, objects in the world are excluded from processing. When the targets appeared in the uncued group, the cost of transitioning from one group to the other was as much as 150 ms (for HUD-to-runway transitions).
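As a toy illustration of the attention-shift account (my own sketch, not the authors' model), predicted response time is just a hypothetical baseline plus the reported ~150 ms cost whenever the target sits outside the cued perceptual group:

```python
BASE_RT_MS = 500         # hypothetical baseline response time
SHIFT_COST_MS = 150      # transition cost reported for HUD-to-runway shifts

def predicted_rt(cued_group: str, target_group: str) -> int:
    """Predicted RT when attention starts on the cued group."""
    rt = BASE_RT_MS
    if target_group != cued_group:  # target is in the other perceptual group
        rt += SHIFT_COST_MS         # attention must shift between groups
    return rt

print(predicted_rt("HUD", "HUD"))     # 500 ms: no shift needed
print(predicted_rt("HUD", "runway"))  # 650 ms: HUD-to-runway transition
```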

The authors suggest some design implications for future HUDs. Since HUDs do not seem to eliminate transition times between instrument processing and world processing, future HUDs should be developed with an eye toward removing the cues that cause the visual system to segregate HUDs from the world. For example, suppose that perspective cues and differential motion cues are largely responsible for the segregation; the problem could then be attenuated by designing HUD symbology to be as conformal as possible with the outside-world scene.

IVIS, if only everything remained the same…

Porter, B. E. (Ed.). (2011). Handbook of Traffic Psychology. Elsevier.

While reading the chapter on ergonomics and human factors, I found the statement I was looking for. It's hard to find definite answers, of course, but ever since I first heard about in-vehicle information systems (IVIS) I have wondered whether they really are effective.

Although the safety potential is huge, the ultimate effects are definitely smaller than expected. These systems would work perfectly if, once included in the vehicle, everything else remained the same. But changing one thing inside the vehicle inherently changes the driver's behavior: everything is connected.

Some of the possible negative effects of IVIS – the book speaks of ITS, Intelligent Transport Systems – are:

1) Underload and diminished attention levels;

2) Information overload (Google Glass and all AR things, please do watch out);

3) Incorrect interpretation of information;

4) Overreliance on the system;

5) Risk compensation.

This is definitely interesting, although quite intuitive for anyone with minimal knowledge of cognition.

Do engineers consider this when they conceive these systems? I know nowadays most of them have psychologists or ergonomists on their teams, but some of them still don’t.

And these systems worked rather well in aeronautical contexts, but it seems as if they were blindly transplanted to the automotive sector.

More on this in the future, definitely.

Selective Looking: Attending to Visually Specified Events

Neisser, U., & Becklen, R. (1975). Selective looking: Attending to visually specified events. Cognitive Psychology, 7(4), 480–494.

I have to say I find these older papers very interesting. It seems they were written following only some basic rules, meaning there is space for humour and “easy-going-ness”. They really make an effort to be light and understandable.
A second note goes to all the methodological effort: WOW. I mean… working in a very sci-fi environment makes it hard to remember those days with no digital cameras and no video editing software. The authors really had to work hard both to conceive the procedure and to adjust it to each participant.

So, this is the idea.
After the big hype of selective listening – where, after listening to two messages simultaneously, people could only pay attention to one of them – the authors decided to apply the same idea to another modality: vision.
So they videotaped two disparate episodes separately, and then showed them to the participants in complete overlap.

In a short literature review, the authors raise the possibility that it is distance that drives selection. An interesting idea – and one that may be useful for the head-up display subject I may study in the future – is that the two objects may lie at different optical distances, so that selection might be ascribed to differential accommodation of the lens of the eye. Because of this, they decided to keep both episodes at the same optical distance.

The authors’ main hypothesis was that the subjects would easily be able to follow one episode and ignore the other.

Other questions they wanted to see answered were:

– How difficult (or how easy) is it to follow one episode and ignore another when both are presented at the same optical distance in the same binocular vision field?

– Does the substitution of dichoptic for binocular presentation change the difficulty of the task? → (won't be speaking about this variable)

– Will unusual events be noticed if they are not part of the episode being observed?

– Is it possible to follow two independent episodes at the same time if instructed to do so?

Method

The authors videotaped two episodes: The hand game and the ball game.

In the hand game, the authors were playing and, from time to time, would make synchronization signals (tapping on the blackboard behind them). Every time the participant saw this, he had to press a switch with the right hand.

In the ball game, three men were playing basketball, moving irregularly. It also had a synch signal, and the participant pressed a switch with the left hand every time he saw that one.

In both games there were some odd events: sometimes the players shook hands, sometimes the ball disappeared, sometimes the men were gradually replaced by three women. These odd events were never mentioned in the instruction phase.


The set-up used half-silvered mirrors and two TVs, arranged to present the images either binocularly (to both eyes) or dichoptically (one episode to each eye).


The subjects performed several trials:

Trials 1 & 2 served as baselines for each episode separately.

In Trial 3, both episodes were presented, but the subject was instructed to respond to the ball game only.

Trial 4 was like Trial 3, but with the hand game as the attended episode.

In Trials 5 & 6, the subject had to respond to events in both episodes.

In Trials 7 to 10, the subject again had to follow one game and ignore the other, but with slowed-down episodes (half the events – 20 instead of 40).

Results

1) In the baseline condition, almost no event was missed.

2) When they had to ignore one of the episodes, performance decreased slightly.

3) Performance deteriorated drastically when subjects had to monitor both episodes simultaneously (20%–40% of events were missed). Participants declared the task demanding and impossible.

4) The odd events were rarely noticed.

These results led the authors to ask whether they were due to peripheral registration, but they immediately set that option aside: eye movements cannot be the principal mechanism of selective attention. In this process, nothing disappears. One event is perceived because the relevant information is being picked up and used; the other information is not picked up in the first place and, consequently, not used.

The authors suggest that the design of some then-existing (1975) optical systems may lead to eye strain and other problems because the scenes are presented at different distances. It would have been interesting if the authors had manipulated the distance of each episode as well. I will come back to that; I'm sure it's already out there somewhere.

TRIVIA

I leave you with a 1999 classic: http://www.youtube.com/watch?v=vJG698U2Mvo

Funny fact: they were inspired by the work of Kolers (1969, 1972), who wore headgear fitted with a half-silvered mirror so that the world ahead of him and the world behind him were simultaneously presented to the binocular field of view. He said he could easily switch between the two views, and while attending to one, the other disappeared.