My main research focus is audio warning signals used in healthcare. Being a big fan of Monty Python, I was surprised to learn they had a satire on medical equipment, particularly the machine that goes ping!
This is a great introduction to the subject!
Pätynen, J., & Lokki, T. (2016). Concert halls with strong and lateral sound increase the emotional impact of orchestra music. The Journal of the Acoustical Society of America, 139(3), 1214-1224.
After several studies and hypotheses, the following experiments were conducted using skin conductance as an objective measure of arousal or emotional impact. This is interesting to correlate with the previous findings – and something I might be able to do soon.
For the listening tests, 28 subjects were chosen; they were either music consumers or music professionals.
In the first experiment, they listened to the stimuli presented in the following way:
Pilot signal + 15 s silence + 12 stimuli (each followed by 15 s of silence)
In the second experiment, participants made paired comparisons between two stimuli and had to choose the one that produced a higher overall impact on “you”. Impact was described as thrilling, intense, impressive, or positively striking. Again, participants could jump seamlessly between stimuli to make the comparison.
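As a sketch of how such paired-comparison judgements can be turned into an overall ranking, here is a simple win-count tally (the hall names and the choices below are invented for illustration; this is not the authors’ actual analysis):

```python
from collections import Counter

# Hypothetical judgements: each tuple is (winner, loser) for one paired
# comparison, i.e. which stimulus a participant felt had the higher
# overall impact. All names and picks are invented.
choices = [
    ("hall_A", "hall_B"),
    ("hall_A", "hall_C"),
    ("hall_B", "hall_C"),
    ("hall_A", "hall_B"),
]

# Tally wins per stimulus; the win count gives a crude impact ranking.
wins = Counter(winner for winner, _loser in choices)
ranking = [name for name, _count in wins.most_common()]
print(ranking)  # stimuli ordered from most to least "impactful"
```

A fuller analysis of paired-comparison data would typically fit something like a Bradley–Terry model instead of raw win counts, but the idea is the same.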
Connecting the results with the floor plans of the rated concert halls, the following conclusions could be drawn:
– Halls with a rectangular typology have a more impressive sound (because more sound reverberates from lateral directions);
– Positions closer to the orchestra were found to elicit stronger emotional responses.
The methodological interest I take from these three studies is the possibility of seamlessly navigating through the stimuli in order to make a rating. This, nevertheless, produces a very specific rating, tied only to that sample.
Lokki, T., Vertanen, H., Kuusinen, A., Pätynen, J., & Tervo, S. (2010, August). Auditorium acoustics assessment with sensory evaluation methods. In Proc. ISRA (pp. 29-31).
The previous study was made using this graphical user interface, where assessors could seamlessly switch between audio clips (just like between wine sips). The continuous scale ranged from 0 to 120.
The assessors were recruited via an online questionnaire with three parts: a) a pure-tone audiometric test; b) a test of vocabulary skills; c) a triangle test for discriminative skills with audio stimuli (from Wikipedia: “The assessors are presented with three products, two of which are identical and the other one different. The assessors are asked to state which product they believe is the odd one out.”)
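The triangle test has a simple statistical backbone: under pure guessing, the odd sample is picked with probability 1/3, so a screening criterion can ask how many correct answers out of n trials make guessing implausible. A small sketch (the trial count and significance level are my own illustrative choices, not from the study):

```python
from math import comb

def min_correct(n, p_guess=1/3, alpha=0.05):
    """Smallest number of correct triangle-test answers out of n trials
    for which the guessing hypothesis has tail probability below alpha."""
    for k in range(n + 1):
        # P(X >= k) when every answer is a 1-in-3 guess
        tail = sum(comb(n, i) * p_guess**i * (1 - p_guess)**(n - i)
                   for i in range(k, n + 1))
        if tail < alpha:
            return k
    return None  # n is too small to ever reject guessing

print(min_correct(10))  # correct answers needed in 10 trials to pass screening
```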
20 assessors were selected, all with a music background, and each completed four sessions in total. In the first two sessions they carried out the attribute elicitation, and in the remaining sessions they used the attributes and scales.
As for the analysis, the classification of the attributes could have been done manually, but it was done with AHC (agglomerative hierarchical clustering). Then, further analyses were made using Multiple Factor Analysis (MFA), which has PCA as its basis. The results are presented here.
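As a sketch of the clustering step, here is agglomerative hierarchical clustering applied to a ratings matrix (all numbers below are random stand-ins; the shapes, cluster count, and linkage choice are my assumptions, not the study’s):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Stand-in data: 10 elicited attributes rated on the 0-120 scale for
# 6 samples (hall/seat combinations). Values are random placeholders.
rng = np.random.default_rng(0)
ratings = rng.uniform(0, 120, size=(10, 6))

# Agglomerative hierarchical clustering (Ward linkage): attributes with
# similar rating profiles across the samples end up in the same cluster.
Z = linkage(ratings, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the dendrogram into 3 clusters
print(labels)  # cluster index (1..3) for each of the 10 attributes
```

On the resulting clusters, MFA (which the authors describe as PCA-based) would then be applied; tools like scikit-learn’s PCA or R’s FactoMineR are commonly used for that step.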
Lokki, T. (2014). Tasting music like wine: Sensory evaluation of concert halls. Physics Today, 67(1), 27.
A few months ago, I read this article called “Tasting Music like Wine: Sensory evaluation of concert halls” by Tapio Lokki and was fascinated by two things:
– The lightness of the article and how it introduced such a complex topic as concert-hall acoustics with an anecdotal situation;
– The methodological intricacy, with all the 3D sounds recorded in such an ingenious way (all orchestra musicians were recorded solo, 24 loudspeaker columns were placed on a stage, each column playing back only one instrument, and the full “orchestra” was recorded at several places in the venue. Very simply put.)
– And, after all, a third interesting thing: the use of wine-tasting know-how for the evaluation of the subjective experience of concert halls.
So the situation is: the author and his wife are listening to a concert while drinking some wine. While the wife enjoys the concert but not so much the wine, the author feels exactly the opposite; both perceive wine and music differently. After some thought, the author concluded that wine and music have a lot in common, because each can be characterized by a multidimensional array of perceptual attributes.
Both are a matter of personal taste, and each person may concentrate on different aspects of the taste or sound. The thing is, winemakers have a solution for this, and have long since developed techniques to determine what makes a wine good or bad.
Like the aroma wheel.
The first question, then, is: could these methods be tailored for the perceptual evaluation of concert halls?
Wine-tasting methods like sensory profiling demand comparison of samples: imagine you have a table with a row of glasses, each holding a different wine, and you, as an assessor, may (and must) take a sip from one and another as many times as you find necessary. Could this be done with sound?
The answer is yes, and please read the original article to find out how.
In wine tasting, two methods are used to gather attributes of wines: consensus vocabulary profiling, in which a number of assessors reach a set of consensual adjectives for each wine; and individual vocabulary profiling – the one used in this work – in which a number of assessors (usually 15 or more) state which characteristics can be found in the wine.
The first experiment had 20 listeners, and all heard recordings from 3 positions in 3 Finnish concert halls. Together, they suggested 102 attributes. After clustering the data, one cluster (overall volume and perceived distance) managed to explain more than 50% of the variance.
The second experiment had only one distance (12 m from the stage), 9 halls, and 17 assessors. They suggested 60 attributes, clustered into 7 groups.
After further analysis (hierarchical multiple factor analysis) and ordering by preference, it was possible to distinguish two groups in this last evaluation: one group preferred an intimate sound in which they could easily distinguish individual instruments and lines, and another group preferred a louder and more reverberant sound with good envelopment and strong bass.
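A toy illustration of how such taste groups can fall out of preference data (all scores below are fabricated, and the simple correlation split here stands in for the hierarchical MFA the authors actually used):

```python
import numpy as np

# Fabricated preference scores: rows = 10 assessors, columns = 9 halls.
# The first five assessors favour "intimate" halls, the last five the
# louder, more reverberant ones (all numbers invented).
rng = np.random.default_rng(1)
intimate = rng.normal([8, 7, 6, 3, 2, 2, 4, 3, 5], 0.5, size=(5, 9))
reverberant = rng.normal([3, 2, 3, 8, 9, 8, 5, 6, 4], 0.5, size=(5, 9))
prefs = np.vstack([intimate, reverberant])

# Correlate every assessor's scores with assessor 0's: a positive
# correlation suggests shared taste, a negative one the opposing group.
r = np.corrcoef(prefs)[0]
same_group = np.where(r > 0)[0]
print(same_group)  # indices of assessors who share assessor 0's taste
```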
It is very impressive how this information could be extracted. Would Portuguese listeners make the same evaluation?
Why, for instance, should we ascribe sadness to a particular piece of music? “There’s nothing intrinsically sad about this music, so how do we extract sadness from that?” Lim uses four parameters: speed, intensity, regularity, and extent—whether something is small or large, soft or loud. Angry speech might be rapid, loud, rough and broken. So might an angry piece of music. Someone who’s walking at a moderate pace using regular strides and not stomping around might be seen as content, whereas a person slowly shuffling, with small steps and an irregular stride, might be displaying that they’re sad. Lim’s hypothesis, as yet untested, is that mothers convey emotion to their babies through those qualities of speed, intensity, regularity, and extent in their speech and facial expressions—so humans learn to think of them as markers of emotion.
Angelica Lim, “How long until a robot cries?”
Issue 1, Nautilus
OK, another cool link (can you tell I’ve spent the afternoon going through my “saved for later” feeds?)
It is called Phonambient and it’s run by a multidisciplinary team. Here’s the description from their “About” page: Phonambient is a project for the documentation and transformation of the contemporary sound heritage. It intends to register and preserve a digital database with sounds that characterize a given city or region, including soundscapes, local sounds, musical extracts, phonetics and phonology. The archive will be made freely available for consultation and use in creative and scientific contexts (…).
If you’re curious, here’s what my city sounds like. Braga, Portugal (cannot link to the exact place).
I’ve bumped into this really interesting project that sonifies data.
It is called Data Driven DJ, and every month the author releases a video pairing algorithmically generated tunes with a visualization, based on real data.
Here is an example: