We are off for the Labor Day Holiday weekend. Here, some avian advice on the waning season.
The post Off for Labor Day: Don’t Let This Fading Summer Pass You By appeared first on The Scholarly Kitchen.
We are off today and tomorrow for the US Independence Day holiday. Also included, a song that hews carefully to archaic rules about prepositions at the end of sentences.
The post Off for the US Holiday — More Grammar appeared first on The Scholarly Kitchen.
Here’s where you can find Scholarly Kitchen Chefs at the SSP Annual Meeting.
The post See You in Portland! SSP Annual Meeting Panels featuring Scholarly Kitchen Chefs (and a Song to Get You There) appeared first on The Scholarly Kitchen.
We’re off for Memorial Day. Please allow Shonen Knife to get your summer started.
The post Out of Office — Summer Begins appeared first on The Scholarly Kitchen.
We are off for the US Labor Day holiday, but wanted to say thanks to all who do the hard work of scholarly communications.
The post Off for Labor Day: Get Behind the Mule appeared first on The Scholarly Kitchen.
We’re off for the US holiday on Monday, so here’s a musical interlude for those heading to the SSP Meeting next week.
The post Off for Memorial Day and Off to Chicago for SSP appeared first on The Scholarly Kitchen.
Music can evoke strong emotions and affect human behaviour. We process music via a series of complex cognitive operations. Consequently, it can be a window to understanding higher brain functions, as well as being used as a diagnostic and therapeutic tool. So how can we understand the way music evokes emotions and effectively use this in healthcare technologies?
Recently, PLOS ONE launched a collection on “Affective Computing and Human-Computer Interactions”. Here, we discuss with Stefan Ehrlich from the Technische Universität München and Kat Agres from the National University of Singapore their paper on a music-based brain-computer interface for emotion mediation.
PLOS – In your paper “A closed-loop, music-based brain-computer interface for emotion mediation” you present a Brain-Computer Interface (BCI) pilot study that uses an automatic music generation system to both affect users’ emotional states and allow them to mediate the music via their emotions. What would you say are the key points of your work?
Stefan Ehrlich – Our work focuses on the integration of music with healthcare technology to mediate and reinforce listeners’ emotional states. The key point we see is in providing a novel automatic music generation system that allows a listener to continuously interact with it via an “emotion display”. The system translates the listener’s brain activity, corresponding to a specific emotional state, into a musical representation that seamlessly and continuously adapts to the listener’s current emotional state. Whilst the user listens, they are made aware of their current emotional state by the type of generated music, and the feedback allows them to mediate or to regain control over that emotional state. Many of the neurofeedback applications that have already been proposed provide only one-dimensional feedback to the subject. For instance, a levitating ball is displayed on the screen, and the subject is asked to move it up or down. The advantage of using music is that it’s possible to map a relatively complex signal, in this case brain activity, in a multi-dimensional manner onto a cohesive, seemingly one-dimensional feedback. It’s possible to embed different information in a single cohesive BCI feedback by using the different features of music, such as rhythm, tempo, the roughness of the rhythm, or the harmonic structure.
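To make the closed-loop idea concrete, here is a minimal sketch of how a single affective estimate decoded from EEG might be mapped onto several musical parameters at once. This is not the authors’ implementation: the `estimate_valence_arousal` decoder and the specific parameter ranges are hypothetical placeholders, chosen only to illustrate the multi-dimensional mapping Ehrlich describes.

```python
import numpy as np

def estimate_valence_arousal(eeg_window):
    """Hypothetical affect decoder returning (valence, arousal) in [-1, 1].

    A real system would use validated EEG features (e.g. band power or
    frontal asymmetry); stand-in statistics are used here so the sketch runs.
    """
    valence = float(np.tanh(eeg_window.mean()))
    arousal = float(np.tanh(eeg_window.std() - 1.0))
    return valence, arousal

def affect_to_music(valence, arousal):
    """Map one affective state onto several musical parameters at once."""
    return {
        "tempo_bpm": 60 + 60 * (arousal + 1) / 2,      # higher arousal -> faster
        "mode": "major" if valence >= 0 else "minor",  # pleasant -> major
        "rhythmic_roughness": (arousal + 1) / 2,       # 0 = smooth, 1 = rough
        "harmonic_tension": (1 - valence) / 2,         # unpleasant -> more tense
    }

# Simulated closed loop: once per second, decode affect and update the music.
rng = np.random.default_rng(0)
for second in range(5):
    eeg_window = rng.normal(size=(32, 256))  # 32 channels x 1 s at 256 Hz
    valence, arousal = estimate_valence_arousal(eeg_window)
    print(second, affect_to_music(valence, arousal))
```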
PLOS – Were there any particular health care applications that you had in mind when designing this pilot study?
Kat Agres – I tend to think of music as being a sort of Swiss army knife where there are lots of features that can come in handy, depending on the scenario or the clinical population. For example, it’s social, it’s engaging, it often evokes personal memories, and it often lends itself to rhythmic entrainment. It’s these properties or features of music that lend themselves particularly well to health care applications. Our main focus is on mental health and emotional wellbeing, and teaching people how to control their own emotions. And I think that’s the really interesting part about this study, that the music is a sonification of the listener’s emotional state, as measured via their EEG. It is meant to influence their emotional state, and helps teach the listener how to mediate their emotional states as they interact with the music system. This sonification can show the listener what’s happening emotionally, and it also allows them to mediate the sound of the music by affecting their own emotional state. The music is being created in real time based on the brain activity. We’ve recently been awarded a fairly large grant in Singapore to develop a holistic BCI system that we’re actually calling a Brain-Computer-Brain Interface. The project will cover different aspects, e.g., motor skills, cognition and emotion. We’ve already started developing the 2.0 version of the automatic music generation system, and we are about to validate it with a listening study with both healthy adults and depressed patients. Once all these validation steps have been completed and we can effectively say that the system is flexible enough to induce different emotional states in a depressed population, we will be applying this to stroke patients who are battling depression.
PLOS – What do you think the main differences will be in the ability of depressed and healthy populations to affect emotions with this system?
Kat Agres – The number one reason people listen to music is to enhance or modify their emotional state or their mood. There is now a significant literature supporting the use of music for various mental health scenarios and for people who are struggling with various mental health conditions. I think that music is particularly well positioned to help people when other things are not helping them. The first group of depressed patients that we will be testing our system on is made up of many young people who actually think of their identity in part in terms of their music. Based on the literature and the unique affordances of music, I think that we have a decent shot at reaching these individuals and helping them figure out how to gain better control of their emotional states. In our pilot study, some individuals really got the hang of it and some had a harder time figuring out how to use the system. I think we’ll find the same thing in this population of depressed patients. I’m cautiously optimistic that this system will be effective for this population.
Stefan Ehrlich – When using the system, different psychiatric and neurological populations will probably elicit different patterns of interaction. These will lead to the next steps in understanding how to modify the system in order to better help the patients. At the moment it’s a system that can help them gain awareness of their emotional state and that allows us to measure the variations between the different groups.
Kat Agres – And one of the interesting directions we are exploring with the automatic music generation system is the trajectory of taking someone from a particular (current) emotional state to another, target emotional state. It will be interesting to compare whether the optimal trajectory through emotion space is similar for depressed patients and healthy adults.
PLOS – Was there anything that particularly surprised you?
Stefan Ehrlich – A surprise for me was that without telling the listeners how to gain control over the feedback, when asked, all of them reported that they self-evoked emotions by thinking about happy/sad moments in their life. I want to emphasise that the system triggered people to engage with their memories and with their emotions in order to make the music feedback change. I was surprised that all of the subjects chose this strategy.
PLOS – What was the biggest challenge for you?
Stefan Ehrlich – The most difficult part was developing the music generation system and mapping it to continuous changes in brain activity. In the beginning we wanted to map brain activity features to musical features, and the idea of focusing on emotions as the target only came during the development of the system. Constraining the system to emotional features and target variables helped to reduce the dimensionality and the complexity, while clarifying the main objective (emotion mediation) of the eventual system.
Kat Agres – Creating an automatic music generation system is not as easy as it might sound, especially when it has to be flexible to react to changes in brain state in real time. There’s a lot of structure and repetition in music. So when the participants try to push their emotion state up or down the music has to adapt in real time to their brain signals and sound continuous and musically cohesive.
Stefan Ehrlich – Yes, and there can’t be a big time-lag with the generated music, as this would compromise the sense of agency participants have over the system. If the system does not react or respond accordingly, people would lose faith that the system actually responds to their emotions.
PLOS – This work is very interdisciplinary with researchers from many different backgrounds. What are your thoughts on interdisciplinary research?
Stefan Ehrlich – I think it is more fun to work in an interdisciplinary setting. I’m really excited to hear and learn about the insight or the perspective of the other side on a topic or problem. It can be occasionally challenging. You have to establish a common ground, values and methodological approaches to a problem. You need to be able to communicate and exchange in an efficient way so that you can learn from each other. It’s important that all of the involved parties are willing to understand to a certain degree the mindset of the other side.
Kat Agres – I feel quite passionately about interdisciplinary research, especially as a cognitive scientist working at a conservatory of music. One of the obvious things that comes to mind when you’re working with people from different disciplines is how they use different terms, theoretical approaches, or methods. And yes, that can be a difficulty. But as long as everyone is clear on what the big challenges are, shares the same high-level perspectives and values, and has a shared sense of what the big goals are, it works well. In order to collaborate, you have to get on the same page about what you think is the most important issue, and then you can decide on the methods and how to get there.
PLOS – Considering your original research backgrounds, how did you end up doing such interdisciplinary research?
Stefan Ehrlich – I have a very non-interdisciplinary background in a way (electrical engineering and computer science). During my masters I attended a lecture called “Introduction to computational neuroscience” and it was a real eye-opener for me. I realized that my background could contribute to research in neuroscience, engineering, and medicine. From then on I started developing a strong interest in research at this intersection of topics.
Kat Agres – I specifically chose an undergrad institution that allowed me to pursue two majors within one degree programme: cognitive psychology and cello performance. I found it really difficult to choose one over the other and eventually I realised that I could study the cognitive science of music. And then I did a PhD in music, psychology, and cognitive science. I consider health to be yet another discipline that I’m interested in incorporating into a lot of my research. I am very grateful that recently I’ve been able to do more research at the intersection of music, technology, and health.
PLOS – In the field of affective computing and human-computer interactions, what do you think are the biggest challenges and opportunities?
Stefan Ehrlich – I think one important aspect is the human in the loop. The human is at the centre of this technology, as important as the system itself. The transfer out of the lab is often very difficult because of the variability that comes with humans. Ultimately, we want to see people using these technologies in the real world, and this is the main challenge.
Kat Agres – I agree that human data can be messy. Physiological signals, like EEG, galvanic skin response, heart rate variability, etc., are all pretty noisy signals, and so it’s just difficult to work with the data in the first place. We see daily advancements in AI, medical technologies, and eHealth. I think the future is going to be about merging these computational and engineering technologies with the creative arts and music.
PLOS – Do you see Open Science practices, like code and data sharing, as important for these fields?
Stefan Ehrlich – Yes absolutely. When I started working in research there were not many data sets available that would have been useful for my work. I think researchers should upload everything – from data to code – to a public repository. I personally use GitHub, which currently has the limitation of not allowing very large files, e.g., EEG data. It’s not an ideal repository for this kind of data at the moment, but there are many other platforms being developed that will hopefully be adopted in the future.
Kat Agres – I wholeheartedly agree that Open Access is extremely important. I am glad that a discussion is happening about the fact that not all researchers have access to funds to make their work Open Access. I’m lucky that I’m attached to an academic institution where one can apply for funds for Open Access. My concern is that policies requiring authors to pay might create elitism in publication. Academic partnerships with journals like PLOS ONE can help researchers publish Open Access.
PLOS – What would be your take home message for the general public?
Stefan Ehrlich & Kat Agres – I think that the public currently perceives music predominantly as a medium for entertainment, but music has a much bigger footprint in human history than this. Historically, music served many important roles in society, from social cohesion, to mother-infant bonding, to healing. In ancient Greece, Apollo was the god of Music and Medicine. He could heal people by playing his harp. They used to think that music had healing properties. The same is found in Eastern cultures, where for example the Chinese character for medicine is derived from the character for music. There is a very long-standing connection between these areas. In more recent years music has taken this more limited role in our society, but now more and more people are beginning to realise that music serves many functions in society, including for our health and wellbeing. We hope that music interventions and technologies such as our affective BCI system will contribute to this evolving landscape and provide a useful tool to help people improve their mental health and well-being.
References:
1. Ehrlich SK, Agres KR, Guan C, Cheng G (2019) A closed-loop, music-based brain-computer interface for emotion mediation. PLOS ONE 14(3): e0213516. https://doi.org/10.1371/journal.pone.0213516
Author Biographies
Stefan Ehrlich is a postdoctoral fellow in the Dystonia and Speech Motor Control Laboratory at Harvard Medical School and Massachusetts Eye and Ear Infirmary, Boston, USA. His current research is focused on brain-computer interfaces (BCIs) for the treatment of focal dystonia using non-invasive neurofeedback and real-time transcranial neuromodulation. Formerly, he was a postdoctoral researcher at the Chair for Cognitive Systems at the Technical University of Munich, where he also obtained his PhD in electrical engineering and computer science in 2020. His contributions include research on passive BCIs for augmenting human-robot interaction, as well as work on easy-to-use wearable EEG-based neurotechnology and music-based closed-loop neurofeedback BCIs for affect regulation.
ORCID ID – 0000-0002-3634-6973
Kat Agres is an Assistant Professor at the Yong Siew Toh Conservatory of Music (YSTCM) at the National University of Singapore (NUS), and has a joint appointment at Yale-NUS College. She was previously the Principal Investigator and founder of the Music Cognition group at the Institute of High Performance Computing, A*STAR. Kat received her PhD in Psychology (with a graduate minor in Cognitive Science) from Cornell University in 2013, and holds a bachelor’s degree in Cognitive Psychology and Cello Performance from Carnegie Mellon University. Her postdoctoral research was conducted at Queen Mary University of London, in the areas of Music Cognition and Computational Creativity. She has received numerous grants to support her research, including Fellowships from the National Institutes of Health (NIH) and the National Institute of Mental Health (NIMH) in the US, postdoctoral funding from the European Commission’s Future and Emerging Technologies (FET) program, and grants from various funding agencies in Singapore. Kat’s research explores a wide range of topics, including music technology for healthcare and well-being, music perception and cognition, computational modelling of learning and memory, automatic music generation, and computational creativity. She has presented her work in over fifteen countries across four continents, and remains an active cellist in Singapore.
ORCID ID – 0000-0001-7260-2447
The post Music based brain-computer interfaces – an interview with Stefan Ehrlich and Kat Agres appeared first on EveryONE.
Have you ever thought about everything that goes into playing music or speaking two languages? Musicians for example need to listen to themselves and others as they play, use this sensory information to call up learned actions, decide what is important and what isn’t for this specific moment, continuously integrate these decisions into their playing, and sync up with the players around them. Likewise, someone who is bilingual must decide based on context which language to use, and since both languages will be fairly automatic, suppress one while recalling and speaking the other, all while continuously modifying their behavior based on their interactions with another listener/speaker. All of this must happen quickly enough for the conversation or song to flow and sound natural and coherent. It sounds exhausting, yet it all happens in milliseconds!
Playing music or speaking two languages are challenging experiences and complex tasks for our brains. Past research has shown that learning to play music or speak a second language can improve brain function, but it is not known exactly how this happens. Psychology researchers in a recent PLOS ONE article examined how being either a musician or a bilingual changed the way the brain functions. Although we sometimes think of music as a universal language, their results indicate that the two experiences enhance brain function in different ways.
One way to test changes in brain function is by using Event Related Potentials (ERPs). ERPs are electrical signals (brain waves) our brains give off immediately after receiving a stimulus from the outside world. They occur in fairly predictable patterns with slight variations depending on the individual brain. These variations, visualized in the figure above with the darkest red and blue areas showing the most intense electrical signals, can clue researchers into how brain function differs between individuals and groups, in this case musicians and bilinguals.
The ERP experiment performed here consisted of a go/nogo task that is frequently used to study brain activity when it is actively suppressing a specific behavior, also called inhibition. In this study, the authors asked research participants to sit in front of a computer while simple shapes appeared on screen, and to press a key when the shape was white—the most common color in the task—but not when it was purple, the least frequent color. In other words, they responded to some stimuli (go) and inhibited their response to others (nogo). This is a similar task to playing music or speaking a second language because the brain has to identify relevant external sensory information, call on a set of learned rules about that information, and make a choice about what action to take.
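For readers curious about what averaging brain waves looks like in practice, here is a minimal sketch with simulated data (not the study’s recordings) of how ERPs are typically obtained from a go/nogo task: EEG epochs time-locked to each stimulus are averaged separately for go and nogo trials, so random background activity cancels out while the stimulus-locked response remains. The channel count, sampling rate, and effect sizes below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                                  # sampling rate (Hz)
n_trials, n_samples = 200, fs             # 200 one-second epochs
is_nogo = rng.random(n_trials) < 0.25     # nogo (purple) trials are rare

# Simulated single-channel epochs: noise plus a small stimulus-locked bump
t = np.arange(n_samples) / fs
bump = np.exp(-((t - 0.3) ** 2) / 0.002)  # peak roughly 300 ms after stimulus
epochs = rng.normal(scale=5.0, size=(n_trials, n_samples))
epochs[is_nogo] += 3.0 * bump             # assume a larger nogo response
epochs[~is_nogo] += 1.0 * bump

# The ERP is simply the average across trials of each condition
erp_go = epochs[~is_nogo].mean(axis=0)
erp_nogo = epochs[is_nogo].mean(axis=0)
print(f"go peak: {erp_go.max():.2f} µV, nogo peak: {erp_nogo.max():.2f} µV")
```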
The authors combined and compared correct responses to each stimulus type in control (non-musician, non-bilingual) groups, musician groups, and bilingual groups. The figure above compares the brainwaves of different groups over time using stimulus-related brainwave components called N2, P2, and LP. As can be seen above, these peaks and valleys were significantly different between the groups in the nogo instances. The N2 wave is associated with the brain’s initial recognition of the meaning or significance of the stimulus and was strongest in the bilingual group. The P2, on the other hand, is associated with the early stages of putting a stimulus into a meaningful context as it relates to an associated behavior, and was strongest in the musician group. Finally, the authors note the LP wave, which showed a prolonged monitoring response in the bilingual group. The authors believe this may mean bilinguals take more time to make sure their initial reaction is correct.
In other words, given a task that involved identifying a specific target and then responding or not responding based on learned rules, these results suggest that musicians’ brains may be better at quickly assigning context and an appropriate response to information because they have a lot of practice turning visual and auditory stimuli into motor responses. Bilinguals, on the other hand, show a strong activation response to stimuli along with prolonged regulation of competing behaviors, likely because of their experience with suppressing the less relevant language in any given situation. Therefore, although both musicianship and bilingualism improve brain function relative to controls, the aspects of brain function they improve are different. As games and activities marketed as “brain training” become popular, the researchers hope this work will help in testing their effectiveness.
Citation: Moreno S, Wodniecka Z, Tays W, Alain C, Bialystok E (2014) Inhibitory Control in Bilinguals and Musicians: Event Related Potential (ERP) Evidence for Experience-Specific Effects. PLoS ONE 9(4): e94169. doi:10.1371/journal.pone.0094169
Images are Figures 1 and 2 from the article.
The post Music, Language, and the Brain: Are You Experienced? appeared first on EveryONE.
This menace may leap out at you in the subway or find you when you’re tucked away, safe in your bed; it might follow you when you’re driving down the street or running at the gym. Hand sanitizer can’t protect you, and once you’re afflicted, the road to recovery can be a long one. However, this isn’t the Bubonic plague or the common cold—instead, the dreaded earworms!
Derived from the German word ohrwurm, which translates literally to “ear-worm,” an earworm commonly refers to a song, or a snippet of a song, that gets stuck in your head. Earworms can occur spontaneously and play in our heads in a seemingly infinite loop. Think of relentlessly catchy tunes, such as “Who Let the Dogs Out?,” “It’s a Small World,” or any Top 40 staple. An estimated 90% of people fall prey to an earworm at least once a week; most episodes are not bothersome, but some can cause distress or anxiety. And yet, despite the earworm’s ubiquity, very little is known about how we react to this phenomenon. With the assistance of BBC 6 Music, the authors of a recent PLOS ONE study set out to connect the dots between how we feel about and deal with these musical maladies.
Researchers drew upon the results of two existing surveys, each focusing on different aspects of our feelings about earworms. In the first, participants were asked to reflect on whether they felt positively or negatively toward earworms, and whether these feelings affected how they responded to them. The second survey focused on how effective participants felt they were in dealing with songs stuck in their heads. Responses to both surveys were given in free-text form.
To make sense of the variety of data each survey provided, the authors coded participant responses and identified key patterns, or themes. Two researchers independently developed their own codes and themes, then compared notes to produce a combined list, as represented below.
The figure above represents responses from the first survey, in which participants assigned a negative or positive value to their earworm experiences and described how they engaged with the tune. The majority didn’t enjoy earworms and assigned a negative value to the experience. These responses were clustered by a common theme, which the researchers labelled “Cope,” and were associated with various attempts to get rid of the internal music. A significant number of participants reported using other music to combat their earworms.
Participants in the second survey, which focused on the efficacy of treating earworms, responded in a number of different ways. Those whose coping strategies were effective often fell into one of two themes: “Engage” or “Distract.” Those who engaged with their earworms did so by, for example, replaying the song; those who wanted distraction often used other songs. Most opted to engage.
Ultimately, the researchers concluded that our relationships with these musical maladies can be rather complex. Yet, whether you embrace these catchy tunes or try to tune them out, the way we feel about earworms is often connected to how we deal with them.
Want to put in your two cents? You can tell the authors how you deal with earworms at their website, Earwormery. For more on this musical phenomenon, listen to personal anecdotes on Radiolab, read about earworm anatomy at The New Yorker, or dig deeper in the study.
Citation: Williamson VJ, Liikkanen LA, Jakubowski K, Stewart L (2014) Sticky Tunes: How Do People React to Involuntary Musical Imagery? PLoS ONE 9(1): e86170. doi:10.1371/journal.pone.0086170
Images: Record playing by Kenny Louie
Figure 1 from the paper.
The post Infectious Earworms: Dealing with Musical Maladies appeared first on EveryONE.
Do we really sing as well as we all think we do in the shower? Exactly how complex is Mel Taylor’s drumming in Wipeout? How we hear things is important not just for the field of music research, but also for the fields of psychology, neurology, and physics. There is a lot more to how we perceive sound than sound waves simply hitting our ears. PLOS ONE recently published two research articles exploring music perception. One article focuses on how the perception of a sound as higher or lower in pitch—the frequency of a musical note relative to other notes—than another sound is influenced by the instrument playing it and the listener’s musical training. The other explores rhythm, including musicians’ perception of rhythmic complexity.
Pitch is the frequency of a sound, commonly described using the words high or low. The quality of tone, or timbre, of an instrument is harder to define. Tone quality is often described using words like warm, bright, sharp, and rich, and can cover several frequencies. In the study presented in “The Effect of Instrumental Timbre on Interval Discrimination,” psychology researchers designed an experiment to determine whether it is more difficult to perceive differences in musical pitch when the notes are played by different instruments. They also tested whether musicians are better at discriminating pitch than non-musicians (you can test yourself with this similar version) to see if musical training changes how people perceive pitch and tone.
The researchers compared the tones of different instruments, using flute, piano, and voice, along with pure tones, or independent frequencies not coming from any instrument. As you can see from the figure above, each instrument has a different frequency range, the pure tone being the most localized or uniformly “colored.” Study participants were given two choices, each with two pitches, and decided which set of pitches they thought was the most different from each other; sometimes they compared different instruments or tone qualities, and sometimes the same.
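The distinction between pitch and timbre can be illustrated with a few lines of tone synthesis (a simplified sketch, not the stimuli used in the study): a pure tone puts all of its energy at a single frequency, while an instrument-like tone stacks harmonics above the same fundamental, which spreads its energy across the spectrogram while leaving the pitch unchanged.

```python
import numpy as np

fs = 44100                              # sample rate (Hz)
t = np.arange(int(0.5 * fs)) / fs       # half a second of audio
f0 = 440.0                              # A4; the pitch is the same for both tones

# Pure tone: all of the energy sits at a single frequency
pure_tone = np.sin(2 * np.pi * f0 * t)

# Instrument-like tone: the same fundamental plus decaying harmonics
harmonic_amps = [1.0, 0.5, 0.33, 0.25, 0.2]
complex_tone = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
                   for k, a in enumerate(harmonic_amps))
complex_tone = complex_tone / np.max(np.abs(complex_tone))   # normalize to [-1, 1]

# Same pitch (440 Hz), different timbre: only the spectra differ
print(pure_tone.shape, complex_tone.shape)
```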
The researchers compared the participants’ answers and found that changes in tone quality influenced which set of pitches participants thought were the most different from each other. Evaluation of the different timbres showed that musicians were the most accurate at judging the pitch interval with pure tones, despite their training being mostly with instrumental tones. Non-musicians seemed to be the most accurate with both pure and piano tones, though the researchers noted this might be less reliable because non-musicians had a tendency to choose instrumental tones in general. Interestingly, both groups were faster at the pitch discrimination task when pure tones were used, and musicians were better at the task than non-musicians. Everyone chose pitch intervals more accurately as the differences between the pitches became larger and more obvious.
Another group of researchers tested how we perceive syncopation, a form of rhythmic complexity, in their research presented in “Syncopation and the Score.” They played different rhythm patterns to musicians and asked them to rank the complexity of each rhythm.
The study was limited, with only ten participants, but in general, the rhythm patterns thought to be the most complex on paper were also perceived as the most complex when the participants listened to them. However, playing the same patterns in a different order sometimes caused listeners to think they were hearing something more or less syncopated. The authors suggest that a rhythm pattern’s perceived complexity depends upon the rhythm patterns played before and after it.
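As a rough illustration of how complexity “on paper” can be quantified, below is a simplified metric-weight syncopation score in the spirit of the Longuet-Higgins and Lee measure. The exact models compared in the paper may differ, so treat this purely as a sketch of the idea: onsets on weak positions that are followed by silence on stronger positions add to the score.

```python
# Metrical weights for a 16-step bar of 4/4 (higher = metrically stronger)
WEIGHTS = [5, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]

def syncopation_score(pattern):
    """Simplified metric-weight syncopation score.

    `pattern` is a list of 16 ones (onsets) and zeros (rests). Each onset on
    a weak position whose next stronger position is silent contributes the
    weight difference between the two positions.
    """
    score, n = 0, len(pattern)
    for i, hit in enumerate(pattern):
        if not hit:
            continue
        for j in range(i + 1, i + n):          # walk forward (wrapping around)
            k = j % n
            if WEIGHTS[k] > WEIGHTS[i]:        # first strictly stronger position
                if pattern[k] == 0:            # ...and it is silent: syncopation
                    score += WEIGHTS[k] - WEIGHTS[i]
                break
    return score

straight = [1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0]   # onsets on every beat
offbeat  = [0,0,1,0, 0,0,1,0, 0,0,1,0, 0,0,1,0]   # onsets pushed off the beat
print(syncopation_score(straight), syncopation_score(offbeat))   # 0 vs 7
```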
Both research studies highlight the gap between music as written or measured and music as we perceive it. We don’t need to be musicians to know that music can play tricks on our ears. It may be that some of us are less susceptible than others to these tricks, but even trained musicians can be fooled. Look here for more research on music perception.
Citations:
Zarate JM, Ritson CR, Poeppel D (2013) The Effect of Instrumental Timbre on Interval Discrimination. PLoS ONE 8(9): e75410. doi:10.1371/journal.pone.0075410
Song C, Simpson AJR, Harte CA, Pearce MT, Sandler MB (2013) Syncopation and the Score. PLoS ONE 8(9): e74692. doi:10.1371/journal.pone.0074692
Image: Spectrograms of four tones – Figure 1A from Zarate JM, Ritson CR, Poeppel D (2013) The Effect of Instrumental Timbre on Interval Discrimination. PLoS ONE 8(9): e75410. doi:10.1371/journal.pone.0075410
Music may be the newest addition to a science communicator’s toolbox. A PLOS ONE paper published today describes an algorithm that represents terabytes of microbial and environmental data in tunes that sound remarkably like modern jazz.
“Microbial bebop”, as the authors describe it, is created using five years’ worth of consecutive measurements of ocean microbial life and environmental factors like temperature, dissolved salts and chlorophyll concentrations. These diverse, extensive data are only a subset of what scientists have been recording at the Western Channel Observatory since 1903.
As first author Larsen explained to the Wired blogs, “It’s my job to take complex data sets and find ways to represent that data in a way that makes the patterns accessible to human observations. There’s no way to look at 10,000 rows and hundreds of columns and intuit what’s going on.”
Each of the four compositions in the paper is derived from the same set of data, but highlights different relationships between the environmental conditions of the ocean and the microbes that live in these waters.
“There are certain parameters like sunlight, temperature or the concentration of phosphorus in the water that give a kind of structure to the data and determine the microbial populations. This structure provides us with an intuitive way to use music to describe a wide range of natural phenomena,” explains Larsen in an Argonne National Laboratories article.
Speaking to Living on Earth, Larsen describes how their music highlights the relationship between different kinds of data. “In most of the pieces that we have posted, the melody is derived from a numerical measurement, such that the lowest measure is the lowest note and the highest measure is the highest note. The other component is the chords. And the chords map to a different component of the data.”
As a result, the music generated from microbial abundance data played to chords generated from phosphorus concentration data will sound quite different from the same microbial data played to chords derived from temperature data.
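A minimal sketch of the kind of mapping Larsen describes might look like the following (this is not the authors’ code): one data series is rescaled so that its lowest value becomes the lowest note and its highest value the highest note, while a second series selects the chord played underneath. The scale, chord choices, and made-up measurement values are illustrative assumptions.

```python
import numpy as np

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI notes, C4 up to C5
CHORDS = {"low": ("C", "E", "G"), "mid": ("F", "A", "C"), "high": ("G", "B", "D")}

def to_melody(series):
    """Rescale a data series so min -> lowest note and max -> highest note."""
    lo, hi = min(series), max(series)
    idx = [round((v - lo) / (hi - lo) * (len(C_MAJOR) - 1)) for v in series]
    return [C_MAJOR[i] for i in idx]

def to_chords(series):
    """Map a second data series onto a small set of chords by tercile."""
    lo, hi = np.percentile(series, [33, 66])
    return [CHORDS["low"] if v < lo else CHORDS["mid"] if v < hi else CHORDS["high"]
            for v in series]

# Illustrative monthly measurements (made-up numbers, not the observatory data)
microbial_abundance = [3.1, 4.7, 9.2, 12.5, 8.0, 5.4]
phosphorus = [0.9, 0.7, 0.4, 0.2, 0.5, 0.8]

for note, chord in zip(to_melody(microbial_abundance), to_chords(phosphorus)):
    print(note, chord)
```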
“Songs themselves probably are never going to actively replace, you know, the bar graph for data analysis, but I think that this kind of translation of complex data into a very accessible format is an opportunity to lead people who probably aren’t highly aware of the importance of microbial ecology in the ocean, and give them a very appealing entry into this kind of data”, explained Larsen in the same interview with Living on Earth.
Though their primary intent was to create a novel way to represent the interactions of microbes in the ocean, the study also suggests that microbial bebop may eventually have applications in crowd-sourcing solutions to complex environmental issues.
For further reading, a PLOS ONE paper from 2011 demonstrated that the metaphors used to explain a problem can have a powerful impact on people’s thoughts and decisions when designing solutions. Could re-phrasing complex environmental data as music lead to solutions we haven’t heard yet? As you ponder the question, listen to some microbial bebop!
Other media sources that also covered this research include LiveScience, gizmag, and the PLOS blog Tooth and Claw.
Citations: Larsen P, Gilbert J (2013) Microbial Bebop: Creating Music from Complex Dynamics in Microbial Ecology. PLoS ONE 8(3): e58119. doi:10.1371/journal.pone.0058119
Thibodeau PH, Boroditsky L (2011) Metaphors We Think With: The Role of Metaphor in Reasoning. PLoS ONE 6(2): e16782. doi:10.1371/journal.pone.0016782
Image: sheet music by jamuraa on Flickr