Yesterday I saw this 2016 video where two sound designers react to and explain some of the world’s most recognizable sounds. Exactly a day before, I had shown some of these sounds to some product design students, and it is quite amazing how tuned in we all are to these auditory interfaces.
At some point in the video, while discussing the Marimba iPhone ringtone, they mention research done at Bell Labs on the best parameters for a ring tone.
Looking for a bit more info, I found this article about the ringtone, which says:
“Scientists at Bell Laboratories’ Human Factors Research Lab performed numerous studies on ringers, from buzzers to thumpers. They studied tonal quality and duration along with the decibel levels needed for the brain to recognize the call alert. They even tested the grandpa of the iPhone “old phone” ringtone. In 1956, 300 research subjects in Crystal Lake, Illinois found the “musical tone ringer” to be “pleasant,” but it took most test subjects a week or so to get accustomed to it. However, when pressed, a majority of test subjects wanted the old bell ringer back. Not much has changed since the days of the early Human Factors research; the brain still works the same, but the technology obviously allows for more finite control of the sounds a ringtone creates.
Ideally, a ring tone should register very clearly and distinctively in the audio range that is central to human hearing, from about 2 to 4 kHz, with a dynamic range (quietest to loudest) of about 96 dB. Even though this audio range is quite crowded with sound, it is also precisely where most spoken languages carry the majority of phoneme distinctions, and thus we have evolved a relatively high level of sound discrimination in this range.
For a ringtone to be decoded ideally by the brain, the audio envelope should ideally pulse from full dynamic range to nearly no sound within a 3–5 second cycle (Bell Labs research). The relative amplitudes of the various harmonics primarily determine the timbre of instruments and sounds, though onset transients, formants, noises, and inharmonicities also play a role.”
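Out of curiosity, the two numbers quoted above – a carrier in the 2–4 kHz band and an envelope that sweeps from full level to near silence over a 3–5 second cycle – are easy to sketch in code. This is just a toy illustration of those parameters, not a reconstruction of any actual ringtone; every name and value here is my own.

```python
import math

def ringtone_sketch(freq_hz=3000.0, cycle_s=4.0, duration_s=8.0, sr=16000):
    """Pulsing sine tone: a toy take on the parameters quoted above.

    freq_hz sits inside the 2-4 kHz band; cycle_s is within the 3-5 s
    envelope cycle attributed to the Bell Labs research. Parameter names
    and values are illustrative, not from the article.
    """
    samples = []
    for i in range(int(duration_s * sr)):
        t = i / sr
        carrier = math.sin(2 * math.pi * freq_hz * t)
        # Raised-cosine envelope: full level to near silence once per cycle.
        envelope = 0.5 * (1 + math.cos(2 * math.pi * t / cycle_s))
        samples.append(envelope * carrier)
    return samples

tone = ringtone_sketch()
print(len(tone), max(abs(s) for s in tone))
```

Writing the samples to a WAV file and listening to them makes the slow amplitude pulse very obvious.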
I don’t know much about music. I never had formal music training, and I have real trouble understanding some things. This means I never seriously reflected on many things I take for granted, like music notation.
I have been reading “How Music Works” by David Byrne for several months now. It is not super engaging, but I am enjoying it immensely, as I am learning a lot about music.
He writes about the music notation we use in the West and how much room for interpretation it leaves. I knew there were different notations, but I don’t really know the pros and cons of each – still don’t.
He wrote about Iannis Xenakis and how he notated his pieces, and I was really impressed. He drew all these lines and connections, specifying how each part should behave in relation to the others, and took so much space to do so! I am pro conventions, but seeing someone do things unconventionally is so refreshing! I loved this.
Yesterday one of my favourite podcasts, 99% Invisible (99pi.org), ran a short story about my research topic, Alarm Design, featuring professor Judy Edworthy. Very cool! You can listen to it here, starting at the 11-minute mark.
My main research focus is the audio warning signals used in healthcare. Being a big fan of Monty Python, I was surprised when I heard they had done a satire of medical equipment, particularly the machine that goes ping!
This is a great introduction to the subject!
Pätynen, J., & Lokki, T. (2016). Concert halls with strong and lateral sound increase the emotional impact of orchestra music. The Journal of the Acoustical Society of America, 139(3), 1214-1224.
After several studies and hypotheses, the following experiments were conducted using skin conductance as an objective measure of arousal or emotional impact. This is interesting to correlate with the previous findings – and something I might be able to do soon.
For the listening tests, 28 subjects were chosen. They were either music consumers or music professionals.
In the first experiment, they listened to stimuli in the following sequence:
Pilot signal + 15 s silence + 12 stimuli (each followed by 15 s of silence)
In the second experiment, participants made paired comparisons between two stimuli and had to choose the one that produced a higher overall impact on “you”. Impact was described as thrilling, intense, impressive, or positively striking. Again, participants could jump seamlessly between stimuli to make the comparison.
Connecting the results with the floor plans of the rated concert halls, it was possible to draw the following conclusions:
– Halls with a rectangular typology have a more impressive sound (because more sound reverberates from the lateral directions);
– Positions closer to the orchestra were found to elicit stronger emotional responses.
The methodological insight I take from these three studies is the possibility of seamlessly navigating through the stimuli in order to make a rating. This, nevertheless, makes for a very specific rating, valid only for that sample.
Lokki, T., Vertanen, H., Kuusinen, A., Pätynen, J., & Tervo, S. (2010, August). Auditorium acoustics assessment with sensory evaluation methods. In Proc. ISRA (pp. 29-31).
The previous study was made using this graphical user interface, where assessors could seamlessly switch between audio clips (just like between wine sips). The continuous scale ranged from 0 to 120.
The assessors were recruited via an online questionnaire with three parts: a) a pure-tone audiometric test; b) a test of vocabulary skills; c) a triangle test of the ability to discriminate audio stimuli (from Wikipedia: “The assessors are presented with three products, two of which are identical and the other one different. The assessors are asked to state which product they believe is the odd one out.”)
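A side note on why the triangle test works as a screening tool: a listener who cannot hear any difference is still right one time in three just by guessing, so only scores clearly above that chance level flag a discriminative listener. A tiny simulation of a pure guesser (entirely my own toy, not the study’s procedure):

```python
import random

def triangle_test_trial(can_discriminate, rng):
    """Simulate one triangle-test trial.

    Three samples are presented, two identical and one odd. A true
    discriminator always picks the odd one out; a guesser is right
    with probability 1/3. Purely illustrative of the screening logic.
    """
    if can_discriminate:
        return True
    return rng.randrange(3) == 0  # blind guess among the three samples

rng = random.Random(42)
trials = 9999
hits = sum(triangle_test_trial(False, rng) for _ in range(trials))
print(hits / trials)  # hovers near 1/3 for a pure guesser
```

Over many trials the guesser’s hit rate settles near 1/3, which is the baseline an assessor has to beat.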
20 assessors were selected, all with a music background, and each completed four sessions in total. In the first two sessions they did the attribute elicitation, and in the last two they used the attributes and scales.
As for the analysis, the classification of the attributes could have been done manually, but it was done with AHC – agglomerative hierarchical clustering. Then, further analyses were made using Multiple Factor Analysis (MFA), which is based on PCA. The results are presented here.
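To make the AHC step concrete, here is a naive average-linkage agglomerative clustering sketch over made-up attribute ratings: every attribute starts in its own cluster, and the two closest clusters are repeatedly merged. This only illustrates the technique the authors name; the real analysis used proper statistical tooling, and every attribute name and number below is invented.

```python
import math

def agglomerative_clusters(points, n_clusters):
    """Naive average-linkage agglomerative hierarchical clustering.

    'points' maps an attribute name to its rating vector (e.g. mean
    scores across halls). Toy illustration only, not the paper's
    actual pipeline.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Start with every attribute in its own cluster.
    clusters = [[name] for name in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Average pairwise distance between the two clusters.
                d = sum(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best  # merge the two closest clusters
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Hypothetical ratings of six attributes across three halls.
ratings = {
    "loudness":     [80, 95, 70],
    "volume":       [78, 92, 72],
    "distance":     [30, 10, 40],
    "reverberance": [60, 85, 50],
    "clarity":      [70, 40, 75],
    "definition":   [68, 42, 73],
}
print(agglomerative_clusters(ratings, 3))
```

With these made-up numbers, near-synonymous attributes (loudness/volume, clarity/definition) end up merged, which is exactly the kind of redundancy AHC is used to collapse before the MFA step.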
Lokki, T. (2014). Tasting music like wine: Sensory evaluation of concert halls. Physics Today, 67(1), 27.
A few months ago, I read this article called “Tasting Music like Wine: Sensory evaluation of concert halls” by Tapio Lokki and was fascinated by three things:
– The lightness of the article and how it introduced such a complex topic as concert-hall acoustics through an anecdote;
– The methodological intricacy, with all the 3D sounds recorded in such an ingenious way (each orchestra musician was recorded solo, then 24 loudspeaker columns were placed on a stage, each column playing only one instrument, and the full “orchestra” was recorded at several positions in the venue. Very simply put.);
– And, above all, the use of wine-tasting know-how for the evaluation of the subjective experience of concert halls.
So the situation is this: the author and his wife are listening to a concert while drinking some wine. While the wife enjoys the concert but not so much the wine, the author feels exactly the opposite. Each perceives the wine and the music differently. After some thought, the author concluded that wine and music have a lot in common, because each can be characterized by a multidimensional array of perceptual attributes.
Both are a matter of personal taste, and each person may concentrate on different aspects of the taste or sound. The thing is, winemakers have a solution for this, having long since developed techniques to determine what makes a wine good or bad.
Like the aroma wheel.
The first question, then, is: could these methods be tailored for the perceptual evaluation of concert halls?
Wine-tasting methods like sensory profiling demand the comparison of samples. That is, imagine you have a table with a line of glasses, each holding a different wine, and you may – and must – take a sip from one and then another as many times as you, as an assessor, find necessary. Could this be done with sound?
The answer is yes, and please read the original article to find out how.
In wine tasting, two methods are used to gather attributes of wines: consensus vocabulary profiling, where a number of assessors agree on a set of consensual adjectives for each wine; and individual vocabulary profiling – the one used in this work – where a number of assessors (usually 15 or more) each point out which characteristics can be found in the wine.
The first experiment had 20 listeners, who all heard 3 recording positions in each of 3 Finnish concert halls. Together, they suggested 102 attributes. After clustering the data, one cluster (overall volume and perceived distance) managed to explain more than 50% of the variance.
The second experiment had only one distance – 12 m from the stage – with 9 halls and 17 assessors. They suggested 60 attributes, clustered into 7 groups.
After further analysis (hierarchical multiple-factor analysis), it was possible to distinguish two groups in this last evaluation (after also ordering by preference): one group preferred an intimate sound in which they could easily distinguish individual instruments and lines, and another preferred a louder and more reverberant sound with good envelopment and strong bass.
It is very impressive that this information could be extracted. Would Portuguese listeners make the same evaluation?