Category: HUD

Attentional Limitations with Head-up Displays

McCann, R. S., Foyle, D. C., & Johnston, J. C. (1993). Attentional Limitations with Head-up Displays. Proceedings of the Seventh International Symposium on Aviation Psychology (pp. 70–75).

In 1993, McCann and collaborators set out to verify whether this effect also holds when a HUD is used in an aviation task. The authors expected to find that visual attention could be focused either on the HUD or on the world beyond it, but not on both simultaneously. Their critique targeted the “blind” application of HUDs to aircraft without consideration of important human factors; from that perspective, the parallel processing assumption is questionable.

According to the authors, there are three perceptual cues (bear in mind we are in 1993) that help distinguish HUD symbology from the world:

1) HUDs are stationary;

2) HUDs are generally drawn in highly saturated green;

3) HUDs are oriented vertically with respect to the eye plane.

Since these cues are distinguishable, the visual system is likely to group the information transmitted by the HUD as a perceptual group, and the world as another. This may have a very negative influence on the main task of piloting or driving: objects in the world may not be processed in parallel with HUD symbology, and transitioning between the HUD and the world may be slowed down by the requirement to shift attention between groups.

So their study had two goals. The first was to test the hypothesis that the visual system parses the HUD and the world as separate perceptual groups, so that when attention is focused on the HUD (world), objects in the world (HUD) are excluded from processing. The second was to determine whether transitioning from the HUD to the world (and from the world to the HUD) requires a shift of attention.

The task was a detection task in a low-fidelity approach to a runway where participants had to find a target (a stop sign or a diamond sign). Before the potential targets appeared – three geometric symbols on the HUD and another three on the runway – the participant was alerted to whether the relevant set of geometric symbols could appear on the HUD or on the runway. Participants were told that if the relevant target was a stop sign, the runway was closed, and they should signal their intention to do a go-around by striking the upper key as quickly as possible. Alternatively, if the relevant target was a diamond, the runway was open, and they should signal their intention to continue the landing by pressing the lower key as rapidly as possible.
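To make the trial structure concrete, here is a minimal sketch of how such a detection task could be mocked up in code. It follows my reading of the description above; the names (Trial, make_trials, correct_response) and the proportion of “shift” trials are my own assumptions, not details taken from the original study.

```python
import random
from dataclasses import dataclass


@dataclass
class Trial:
    cued_group: str    # "HUD" or "runway": where the relevant symbols are announced to appear
    target_group: str  # where the target actually appears
    target: str        # "stop" (runway closed) or "diamond" (runway open)


def correct_response(trial: Trial) -> str:
    # Stop sign -> go-around, signalled with the upper key;
    # diamond -> continue the landing, signalled with the lower key.
    return "upper key" if trial.target == "stop" else "lower key"


def make_trials(n: int, p_shift: float = 0.2) -> list:
    trials = []
    for _ in range(n):
        cue = random.choice(["HUD", "runway"])
        # On most trials the target turns up in the cued group; on the remaining
        # "shift" trials it turns up in the other group, which is where the
        # transition cost between the two perceptual groups can be measured.
        other = "runway" if cue == "HUD" else "HUD"
        target_group = other if random.random() < p_shift else cue
        trials.append(Trial(cue, target_group, random.choice(["stop", "diamond"])))
    return trials


if __name__ == "__main__":
    for t in make_trials(5):
        print(t, "->", correct_response(t))
```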

The results supported both hypotheses tested: when subjects focused on the HUD for the duration of the trial, there was little effect of conflicting information in the world. Similarly, when subjects focused on the world for the duration of the trial, there was little influence of conflicting information on the HUD. These results add to previous findings that when pilots focus attention on the HUD, objects in the world are excluded from processing. When the targets were located in an incongruent situation, the shift costs in transitioning from one group to the other were as much as 150 msec (for HUD-to-runway transitions).
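To see what a “shift cost” is numerically: it is simply the difference in mean reaction time between trials where the target appears in the uncued group (an attention shift is required) and trials where it appears in the cued group. The reaction times below are invented for illustration only; they are not the paper’s data.

```python
from statistics import mean

# Invented reaction times in ms, for illustration only; not the paper's data.
same_group_rts = [520, 540, 510, 530]    # target appeared in the cued group (no shift needed)
other_group_rts = [660, 690, 670, 680]   # target appeared in the other group (shift required)

shift_cost = mean(other_group_rts) - mean(same_group_rts)
print(f"shift cost = {shift_cost:.0f} ms")  # 150 ms with these made-up numbers
```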

The authors suggest some design implications for future HUDs. Since HUDs do not seem to eliminate transition times between instrument processing and world processing, future HUDs should be developed with an eye toward removing the cues that cause the visual system to segregate HUDs from the world. For example, suppose that perspective cues and differential motion cues are in large part responsible for the segregation. The problem could then be attenuated by designing HUD symbology to be as conformal as possible with the out-the-window scene.

The effect of viewing a car head-up display on ocular accommodation and response times

Wolffsohn, J. S., Edgar, G. K., & McBrien, N. A. (1998). The effect of viewing a car head-up display on ocular accommodation and response times. In A. G. Gale, I. D. Brown, C. M. Haslegrave, & S. P. Taylor (Eds.), Vision in Vehicles – VI (pp. 143–151).

This paper appeared in the Vision in Vehicles VI proceedings and is from 1998. It reminded me of this one in the sense that both present two overlaid sets of stimuli to attend to.

The authors manipulated the distance at which the HUD image was collimated in order to examine differences in accommodation and in reaction time to detect changes in the environment. Ocular accommodation was measured continuously while drivers attended either to the environment alone or to the HUD plus the environment.

The question underlying this research is: is there an optimal distance at which a HUD should be focused? A driver, unlike a pilot, focuses on stimuli at very different distances, varying from 1 m to infinity.

Results showed that although there was a trend toward higher accommodative levels when the HUD was focused closer to the subject, there was no significant difference in accommodative level over the range of distances at which the virtual HUD image was focused.

The response times of subjects for detecting changes in the external environment (traffic lights) were consistently faster than those for detecting changes in the HUD indicators, under all conditions.

However, subjects’ response times for detecting changes in the traffic lights and HUD indicators did not vary significantly with the distance at which the virtual HUD image was focused.

When performing the HUD-assisted driving task, there was a significant increase in the percentage of both traffic light changes and HUD indicator changes missed, across all distances tested.

So these results are very important, as they suggest that, with a HUD, vital danger signs, such as warning boards, are more likely to be missed.

A HUD

Just in case you have no idea what I’m talking about when I mention Head-Up Displays in vehicles, here’s an example: