A colleague shared with me a most interesting video from 1963, starring Paul Fitts, presenting what was then called Human Engineering or Engineering Psychology. It’s an excellent explanation of the work and goals of Human Factors:
“Engineering psychologists attempt to eliminate (such) confusion not by changing man’s habits, but by changing the machine. They attempt to discover how machines can be designed so that the machine will speak a language which man can understand.”
Pirhonen, A., & Tuuri, K. (2010). Communicative functions of sounds which we call alarms. In Proceedings of the 16th International Conference on Auditory Display ICAD 2010 (pp. 279-286).
This paper caught my attention because it provides qualitative guidelines that can shape a good alarm sound. The methodology used to gather the information is also very creative, and it’s good to see there’s a world beyond frequencies, loudness and intervals. The setting is an anaesthesia workstation.
The method used was RUS – Rich Use Scenario. This method focuses on the experience of the user as a person, so it uses lively stories with rich imagery of how a product or application is used in an environment familiar to the listener or reader – they should identify with the story. In brief, the method intends to answer: how would the character experience this or that idea?
In this case RUS was applied as a radio play. According to the authors, this means of presentation is suited for the brainstorming of sounds. The procedure was the following:
1. The manuscript for the radio-play was prepared in cooperation with the experts in the context (usability experts of the manufacturer).
2. The radio play was implemented. In the radio play, the sounds-to-be-designed appear as points of ”missing” sound effects, allowing them to be imagined.
3. Two design panel sessions were organised. The participants were six students of different subjects. However, none of the participants had medical science as a major subject, i.e. the participants were amateurs in terms of the context. In the sessions, the participants planned and implemented appropriate sound effects at the given points of the radio-play (which were sounds from the anaesthesia workstation).
4. On the basis of the work of the non-expert design panels, draft sounds were implemented and embedded in the radio-play.
5. Two expert panel sessions, each made up of two anaesthesia nurses and one doctor, were conducted. In the sessions, the final radio-play, including the sound effects, was listened to and discussed.
6. A post-questionnaire was sent to all expert panel participants.
7. The discussions of the expert panels were transcribed and analysed.
The analysis covered four different sounds for different conditions, representing different levels of warnings in an anaesthesia workstation. It consisted of transcribing the participants’ opinions on the proposed sounds and how they could be improved. Here’s an example for a “Medium Priority Alarm”:
The draft sound was produced with a metallophone. It consists of a series of two damped hits at approximately 1 sec intervals (D# tone, medium register). The events causing the alarm condition in the scenario were:
• Blood pressure has exceeded the alarm level (patient based alarm)
• The entropy meter is badly connected (device based alarm).
General observations concerning design principles:
• In expert panel 1, the events of the scenario were found to differ in priority:
. . . but I think that if blood pressure has really been too high, it is quite different and requires different reaction than a badly connected entropy sensor – if it has not been pushed in tightly enough thus losing contact.
• Expert panel 2 wished medium-level alarms to be merely informing rather than alarming:
. . . It has to be noted, that ‘aha’, but not anything more severe, let’s sign for it in a few minutes. But if you are busy with other, important tasks and that is tapping away all the time in the background, it would rile.
. . . it obviously depends on the scale – what is classified as important.
Opinions about the draft sound:
. . . Perhaps a bit too feisty. . . kind of loose. . .
. . . should not be that dense. . . . . . I don’t like that metallic tone, it’s irritating. . . . . . [should be] somehow softer. . .
. . . Were there two taps? Perhaps rather. . . well it depends on the qualities of the sound but perhaps one of that kind would be good.
. . . since there were two of them [taps], it made it kind of commanding, like ‘hey, . . . !!’
Features of the sound:
• Medium-level alarm should not be too loud, obtrusive nor frequent.
. . . perhaps high priority alarm should be something like this (tapping continuously) to grab attention, but these kind of sounds in which no immediate reaction is necessary, perhaps simple [“bø:b] would be adequate.
• On the other hand, it should be snappy and adequately startling.
• Soft, non-metallic timbre would be desirable.
• Single-tone structure (instead of two tones) and longer pause between repetitions (at 10-15 sec intervals) was proposed.
The comments indicated that the real challenge is designing low and medium priority alarms, since most of the time they startle or irritate; participants stated they should be softer and less metallic. The main conclusions:
High priority alarms
• Sounds can clearly be even more alarming – in terms of perceived urgency – than the sounds which follow the standard.
• Even if it can be assumed that there is an immediate reaction to the alarm, continuous alarm signals are not recommended.
• Melodic structures do not seem to be perceived as alarming; possibly quite the contrary.
• Percussive sounds appear to be favoured, at least by our panels, i.e., the underlying mental model or action model is to warn by beating, stamping, knocking etc.
Medium priority alarms
• The sound needs to get attention, but excessive commanding or sharp quality should be avoided.
• One single soft sound object, repeating at about 10 second intervals, could be adequate. The alarm sound ISO standard provides an appropriate guideline for the frequency of repetitions.
• Sound objects could be constructed in terms of the burst definition in the ISO standard. The strictly defined structure of burst should be broken up, though. Sound objects do not need to be mechanical, beeping pulses either.
• Attention should be paid to the timbre and internal dynamics of sounds. Crucial factors in the perceived softness are onset (attack) and offset (release) phases of a sound object and the legato between separate objects.
• Percussive sounds did not work in the panels. The related action model should be softer than beating, e.g. arched swing, circular movement or waves.
• Even though it is a question of an alarm, the dominating communicative function should not be commanding or alarming, but something more subtle.
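To make the envelope guidance concrete, here is a minimal Python sketch (my own illustration, not from the paper) of a single soft “sound object”: a sine tone with gentle linear attack and release ramps, as the panels preferred. The frequency echoes the draft sound’s D# pitch; all parameter values are guesses, and a medium-priority alarm would simply repeat one such object at roughly 10 second intervals.

```python
import math

RATE = 44100  # samples per second

def soft_tone(freq=622.25, dur=0.4, attack=0.08, release=0.15):
    """One soft sound object: a sine tone (~622 Hz, i.e. D#5) shaped
    by a linear attack ramp and a linear release ramp, so the onset
    and offset are gentle rather than percussive or metallic.
    All parameter values are illustrative guesses, not from the paper."""
    n = int(RATE * dur)
    samples = []
    for i in range(n):
        t = i / RATE
        if t < attack:
            env = t / attack                      # fade in
        elif t > dur - release:
            env = max(0.0, (dur - t) / release)   # fade out
        else:
            env = 1.0                             # sustain
        samples.append(env * math.sin(2 * math.pi * freq * t))
    return samples

tone = soft_tone()
```

Sharpening the attack (or stacking two quick hits, as in the draft) pushes the sound back toward the “commanding” quality the panels disliked, which is exactly the knob this sketch exposes.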
Low priority alarms
• The sound should be noted, but all commanding or sharp qualities should be avoided.
• Close resemblance to a medium-priority alarm, but more subtle, soft, “round”, simpler in structure and more infrequent.
• Alarming qualities – in the traditional manner – should not be included at all.
• One single, very subtle sound object, repeating at 15-30 second intervals. The standard is a good basis for defining the interval.
• A burst consisting of two melodic sounds, suggested by the standard, appears too obtrusive for low priority alarms.
• Communicative intention should be to guide attention or inform with subtle, pleasant means. A commanding or alarming quality is not at all appropriate.
This post may seem out of context, but it may be useful for someone out there. I will be talking about card sorting analysis.
The technique is well described in many websites and books, but it is essentially a low-fi usability method used to organize information. Do you want to organize a menu for a website? Write the names of all pages on cards, ask some people to organize them according to some criteria, or according to predefined categories, and finally ask them to give a name to the created categories. You’ll end up with different proposals for organizing the website menu. And then?
These are just some ideas for the data analysis; if someone finds them useful I could develop some of them a bit further.
(Before I forget, SynCaps is a really cool piece of software for card sorting analysis.)
1. Consensus Analysis
To begin with, you can see if your participants differ in some way. Imagine you gather participants from different backgrounds: their mental models can be different, and that can distort your final analysis.
This type of analysis shows the degree of agreement between participants, and it uses the participants matrix. The output is the percentage of consensus between each pair of participants, computed from the number of cards that would have to be moved for Participant A to have the same organization as Participant B. The higher this percentage, the greater the consensus between the two participants.
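A minimal sketch of this calculation, assuming each participant’s sort is stored as a dict mapping card → category name (the data layout and function names are my own, not from any card-sorting tool). Category labels differ per participant, so we try every alignment of A’s categories onto B’s and keep the one needing the fewest moves; brute force is fine for the handful of categories a typical sort produces.

```python
from itertools import permutations

def cards_to_move(sort_a, sort_b):
    """Minimum number of cards to move so that sort_a has the same
    organization as sort_b. Category names are matched by trying every
    alignment (brute force; fine for small numbers of categories)."""
    items = list(sort_a)
    cats_a = sorted(set(sort_a.values()))
    cats_b = sorted(set(sort_b.values()))
    # Pad B's categories so every A-category can map somewhere (or nowhere).
    while len(cats_b) < len(cats_a):
        cats_b.append(None)
    best = len(items)
    for perm in permutations(cats_b, len(cats_a)):
        mapping = dict(zip(cats_a, perm))
        moved = sum(1 for it in items if mapping[sort_a[it]] != sort_b[it])
        best = min(best, moved)
    return best

def consensus(sort_a, sort_b):
    """Percentage of cards already in agreement between two participants."""
    n = len(sort_a)
    return 100.0 * (n - cards_to_move(sort_a, sort_b)) / n
```

For example, two participants who built identical groups under different names score 100%, however they labelled the piles.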
2. Item X Item
This is the classic analysis where the number of times an item is paired with another is organized in a matrix.
3. Item X Group
Same as the previous analysis, except it represents the strength with which each item was associated with a group.
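Both matrices are simple tallies over the raw sorts. Here is a sketch of how they could be built, again assuming each sort is a dict mapping card → category name, with group names already standardised for the Item X Group case (the representation is my own choice):

```python
from collections import defaultdict
from itertools import combinations

def item_x_item(sorts):
    """counts[(a, b)] = number of participants who put cards a and b
    in the same category (each pair stored in sorted order)."""
    counts = defaultdict(int)
    for sort in sorts.values():
        by_cat = defaultdict(list)
        for item, cat in sort.items():
            by_cat[cat].append(item)
        for members in by_cat.values():
            for a, b in combinations(sorted(members), 2):
                counts[(a, b)] += 1
    return counts

def item_x_group(sorts):
    """counts[(item, group)] = number of participants who placed the
    card in that group; assumes group names are already standardised."""
    counts = defaultdict(int)
    for sort in sorts.values():
        for item, cat in sort.items():
            counts[(item, cat)] += 1
    return counts
```

High counts in the Item X Item matrix are the pairs participants see as belonging together, which is the raw material for clustering or for the contextual-navigation idea below.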
4. Cue Validity
It could be interesting to know the “findability” of an item. That is, if we were looking specifically for it, would we search in the right place? Some authors suggest that this findability is correlated with category validity – the frequency of an item within its category, compared to its frequency across all other categories. The cue validity index was created to evaluate this findability. It is the frequency with which an item is associated with the category in question, divided by the total frequency of that item over all categories. The Item X Group matrix is used for this, and when the index reaches 1.0, it means the item was never grouped in any other category by any participant.
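The index is a single ratio over the sorts, so it can be sketched directly (same assumed data layout as above: each participant’s sort is a dict mapping card → category name; the function name is my own):

```python
def cue_validity(item, category, sorts):
    """Frequency with which `item` was placed in `category`, divided by
    the item's total frequency across all categories. 1.0 means every
    participant who sorted the item put it in this category."""
    in_cat = sum(1 for s in sorts.values() if s.get(item) == category)
    total = sum(1 for s in sorts.values() if item in s)
    return in_cat / total if total else 0.0
```

Items scoring well below 1.0 against their intended category are the “difficult” cards worth flagging for the contextual-navigation treatment.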
5. Contextual Navigation
When you have items with low cue validity/findability, one solution could be contextual navigation. This strategy creates links to a particular page, like the “see also” feature often seen in e-commerce websites. One common example is Amazon’s “Customers Who Bought This Item Also Bought”.
This is done by extracting the target item’s row from the Item X Item matrix. Then all elements are ordered by the number of times they were classified together with the target item. The frequencies belonging to the same category are removed, and the designer chooses a number of items to link (e.g. the three most similar).
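These steps can be sketched as follows, under the same assumed data layout (sorts as dicts of card → category) plus a `final_categories` dict holding the category the designer finally assigned each item to; both names are mine, not from any tool:

```python
from collections import Counter

def see_also(target, sorts, final_categories, n=3):
    """Top-n items most often grouped with `target` by participants,
    excluding items in the target's own final category (those are
    already reachable through the regular menu)."""
    co = Counter()
    for s in sorts.values():
        target_cat = s.get(target)
        if target_cat is None:
            continue
        for item, cat in s.items():
            if item != target and cat == target_cat:
                co[item] += 1   # grouped with the target by this participant
    own = final_categories.get(target)
    ranked = [item for item, _ in co.most_common()
              if final_categories.get(item) != own]
    return ranked[:n]
```

The result is a ready-made “see also” list for the hard-to-find page, built from where participants actually expected it to live.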
I believe these analyses will already give you useful hints about how many categories to create, whether attention should be paid to differences between groups of participants, and how/where to place the difficult items.