Have you ever thought about everything that goes into playing music or speaking two languages? Musicians, for example, need to listen to themselves and others as they play, use this sensory information to call up learned actions, decide what is important and what isn’t for this specific moment, continuously integrate these decisions into their playing, and sync up with the players around them. Likewise, someone who is bilingual must decide, based on context, which language to use and, since both languages will be fairly automatic, suppress one while recalling and speaking the other, all while continuously modifying their behavior based on interactions with another listener and speaker. All of this must happen quickly enough for the conversation or song to flow and sound natural and coherent. It sounds exhausting, yet it all happens in milliseconds!
Playing music and speaking two languages are challenging experiences and complex tasks for our brains. Past research has shown that learning to play music or to speak a second language can improve brain function, but exactly how this happens is not known. Psychology researchers in a recent PLOS ONE article examined how being a musician or a bilingual changes the way the brain functions. Although we sometimes think of music as a universal language, their results indicate that the two experiences enhance brain function in different ways.
One way to test changes in brain function is by using Event Related Potentials (ERPs). ERPs are electrical signals (brain waves) our brains give off immediately after receiving a stimulus from the outside world. They occur in fairly predictable patterns with slight variations depending on the individual brain. These variations, visualized in the figure above with the darkest red and blue areas showing the most intense electrical signals, can give researchers clues about how brain function differs between individuals and groups, in this case musicians and bilinguals.
The ERP experiment performed here used a go/nogo task, which is frequently employed to study brain activity while the brain is actively suppressing a specific behavior, also called inhibition. In this study, the authors asked research participants to sit in front of a computer while simple shapes appeared on screen and to press a key when a shape was white (the most common color in the task) but not when it was purple (the least frequent color). In other words, participants responded to some stimuli (go) and inhibited their response to others (nogo). This task is similar to playing music or speaking a second language because the brain has to identify relevant external sensory information, call on a set of learned rules about that information, and make a choice about what action to take.
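The decision rule participants followed is simple enough to write down. Here is a toy sketch of that trial logic (the colors come from the study; the function name and return strings are my own):

```python
def trial_response(shape_color: str) -> str:
    """Go/nogo rule from the task: respond to white shapes (go),
    withhold the response for purple shapes (nogo)."""
    return "press key" if shape_color == "white" else "withhold"

print(trial_response("white"))   # press key
print(trial_response("purple"))  # withhold
```

The rule itself is trivial; what the ERP recordings capture is everything the brain does in the few hundred milliseconds between seeing the shape and executing (or suppressing) the key press.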
The authors combined and compared correct responses to each stimulus type in control (non-musician, non-bilingual), musician, and bilingual groups. The figure above compares the brainwaves of the different groups over time using stimulus-related brainwave components called the N2, P2, and LP. As can be seen above, these peaks and valleys differed significantly between the groups in the nogo instances. The N2 wave is associated with the brain’s initial recognition of the meaning or significance of the stimulus and was strongest in the bilingual group. The P2, on the other hand, is associated with the early stages of putting a stimulus into a meaningful context as it relates to an associated behavior, and was strongest in the musician group. Finally, the authors note the LP wave, which showed a prolonged monitoring response in the bilingual group; they believe this may mean bilinguals take more time to make sure their initial reaction is correct.
In other words, given a task that involved identifying a specific target and then responding or not responding based on learned rules, these results suggest that musicians’ brains may be better at quickly assigning context and an appropriate response to information because they have a lot of practice turning visual and auditory stimuli into motor responses. Bilinguals, on the other hand, show a strong activation response to stimuli along with prolonged regulation of competing behaviors, likely because of their experience with suppressing the less relevant language in any given situation. Therefore, although both musicianship and bilingualism improve brain function relative to controls, the aspects of brain function they improve are different. As “brain training” games and activities become popular, the researchers hope this work will help in testing their effectiveness.
Citation: Moreno S, Wodniecka Z, Tays W, Alain C, Bialystok E (2014) Inhibitory Control in Bilinguals and Musicians: Event Related Potential (ERP) Evidence for Experience-Specific Effects. PLoS ONE 9(4): e94169. doi:10.1371/journal.pone.0094169
Images are Figures 1 and 2 from the article.
The post Music, Language, and the Brain: Are You Experienced? appeared first on EveryONE.
This menace may leap out at you in the subway or find you when you’re tucked away, safe in your bed; it might follow you when you’re driving down the street or running at the gym. Hand sanitizer can’t protect you, and once you’re afflicted, the road to recovery can be a long one. However, this isn’t the bubonic plague or the common cold: it’s the dreaded earworm!
Derived from the German word ohrwurm, which translates literally to “ear-worm,” an earworm commonly refers to a song, or a snippet of a song, that gets stuck in your head. Earworms can occur spontaneously and play in our heads in a seemingly infinite loop. Think of relentlessly catchy tunes, such as “Who Let the Dogs Out?,” “It’s a Small World,” or any Top 40 staple. An estimated 90% of people fall prey to an earworm at least once a week; most are not bothersome, but some can cause distress or anxiety. And yet, despite the earworm’s ubiquity, very little is known about how we react to this phenomenon. With the assistance of BBC 6 Music, the authors of a recent PLOS ONE study set out to connect the dots between how we feel about these musical maladies and how we deal with them.
Researchers drew upon the results of two existing surveys, each focusing on different aspects of our feelings about earworms. In the first, participants were asked to reflect on whether they felt positively or negatively toward earworms, and whether these feelings affected how they responded to them. The second survey focused on how effective participants felt they were in dealing with songs stuck in their heads. Responses to both surveys were given in free-form text.
To make sense of the variety of data each survey provided, the authors coded participant responses and identified key patterns, or themes. Two researchers developed their own codes and themes, compared notes and developed a list, as represented below.
The figure above represents responses from the first survey, in which participants assigned a negative or positive value to their earworm experiences and described how they engaged with the tune. The majority didn’t enjoy earworms and assigned a negative value to the experience. These responses were clustered by a common theme, which the researchers labelled “Cope,” and were associated with various attempts to get rid of the internal music. A significant number of participants reported using other music to combat their earworms.
Participants in the second survey, which focused on the efficacy of treating earworms, responded in a number of different ways. Those whose way of dealing was effective often fell into one of two themes: “Engage” or “Distract.” Those that engaged with their earworms did so by, for example, replaying the song; those that wanted distraction often utilized other songs. Most opted to engage.
Ultimately, the researchers concluded that our relationships with these musical maladies can be rather complex. Yet, whether you embrace these catchy tunes or try to tune them out, the way we feel about earworms is often connected to how we deal with them.
Want to put in your two cents? You can tell the authors how you deal with earworms at their website, Earwormery. For more on this musical phenomenon, listen to personal anecdotes on Radiolab, read about earworm anatomy at The New Yorker, or dig deeper in the study.
Citation: Williamson VJ, Liikkanen LA, Jakubowski K, Stewart L (2014) Sticky Tunes: How Do People React to Involuntary Musical Imagery? PLoS ONE 9(1): e86170. doi:10.1371/journal.pone.0086170
Figure 1 from the paper.
The post Infectious Earworms: Dealing with Musical Maladies appeared first on EveryONE.
Do we really sing as well as we all think we do in the shower? Exactly how complex is Mel Taylor’s drumming in Wipeout? How we hear things is important not just for the field of music research, but also for the fields of psychology, neurology, and physics. There is a lot more to how we perceive sound than sound waves just hitting our ears. PLOS ONE recently published two research articles exploring music perception. One article focuses on how perceiving a sound as higher or lower in pitch—the frequency of a musical note relative to other notes—than another sound is influenced by different instruments and the listener’s musical training. The other explores rhythm, including musicians’ perception of rhythmic complexity.
Pitch is the perceived frequency of a sound, commonly described using the words high or low. The quality of tone, or timbre, of an instrument, on the other hand, is harder to define. Tone quality is often described using words like warm, bright, sharp, and rich, and a single instrumental tone can span several frequencies. In the study presented in “The Effect of Instrumental Timbre on Interval Discrimination,” psychology researchers designed an experiment to determine whether it is more difficult to perceive differences in musical pitch when the pitches are played by different instruments. They also tested whether musicians are better at discriminating pitch than non-musicians (you can test yourself with this similar version) to see if musical training changes how people perceive pitch and tone.
The researchers compared the tones of different instruments, using flute, piano, and voice, along with pure tones, or independent frequencies not coming from any instrument. As you can see from the figure above, each instrument has a different frequency range, with the pure tone being the most localized or uniformly “colored.” Study participants were given two choices, each consisting of two pitches, and decided which pair of pitches they thought was more different; sometimes they compared different instruments or tone qualities, and sometimes the same.
The researchers compared the participants’ answers and found that changes in tone quality influenced which pair of pitches participants judged most different. Evaluation across the different timbres showed that musicians were most accurate at judging pitch intervals with pure tones, despite their training being mostly with instrumental tones. Non-musicians seemed most accurate with both pure and piano tones, though the researchers noted this result might be less reliable because non-musicians tended to favor instrumental tones in general. Interestingly, both groups were faster at the pitch discrimination task when pure tones were used, and musicians were better at the task than non-musicians. Everyone chose pitch intervals more accurately as the differences between the pitches became larger and more obvious.
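As an aside, the size of a pitch interval can be quantified from two fundamental frequencies using the standard equal-temperament relation (this is general music math, not a method from the paper):

```python
import math

def interval_semitones(f1: float, f2: float) -> float:
    """Size of the interval between two frequencies, in equal-tempered semitones."""
    return 12 * math.log2(f2 / f1)

# An octave (440 Hz -> 880 Hz) spans exactly 12 semitones;
# a perfect fifth (440 Hz -> 660 Hz) is about 7 semitones.
print(interval_semitones(440, 880))            # 12.0
print(round(interval_semitones(440, 660), 2))  # 7.02
```

Interval discrimination tasks like the one in this study ask listeners to compare interval sizes like these while the timbre carrying them varies.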
Another group of researchers tested how we perceive syncopation, a form of rhythmic complexity, in the research presented in “Syncopation and the Score”: they played different rhythms to musicians and asked them to rank the degree of complexity of each one.
The study was limited, with only ten participants, but in general, the rhythm patterns thought to be the most complex on paper were also perceived as the most complex when the participants listened to them. However, playing the same patterns in a different order sometimes caused listeners to think they were hearing something more or less syncopated. The authors suggest that a rhythm pattern’s perceived complexity depends upon the rhythm patterns played before and after it.
Both research studies highlight the difference between music as it is played and music as we perceive it. We don’t need to be musicians to know that music can play tricks on our ears. Some of us may be less susceptible than others to these tricks, but even trained musicians can be fooled. Look here for more research on music perception.
Zarate JM, Ritson CR, Poeppel D (2013) The Effect of Instrumental Timbre on Interval Discrimination. PLoS ONE 8(9): e75410. doi:10.1371/journal.pone.0075410
Song C, Simpson AJR, Harte CA, Pearce MT, Sandler MB (2013) Syncopation and the Score. PLoS ONE 8(9): e74692. doi:10.1371/journal.pone.0074692
Image: Spectrograms of four tones – Figure 1A from Zarate JM, Ritson CR, Poeppel D (2013) The Effect of Instrumental Timbre on Interval Discrimination. PLoS ONE 8(9): e75410. doi:10.1371/journal.pone.0075410
Music may be the newest addition to a science communicator’s toolbox. A PLOS ONE paper published today describes an algorithm that represents terabytes of microbial and environmental data in tunes that sound remarkably like modern jazz.
“Microbial bebop”, as the authors describe it, is created using five years’ worth of consecutive measurements of ocean microbial life and environmental factors like temperature, dissolved salts, and chlorophyll concentrations. These diverse, extensive data are only a subset of what scientists have been recording at the Western Channel Observatory since 1903.
As first author Larsen explained to the Wired blogs, “It’s my job to take complex data sets and find ways to represent that data in a way that makes the patterns accessible to human observations. There’s no way to look at 10,000 rows and hundreds of columns and intuit what’s going on.”
Each of the four compositions in the paper is derived from the same set of data, but highlights different relationships between the environmental conditions of the ocean and the microbes that live in these waters.
“There are certain parameters like sunlight, temperature or the concentration of phosphorus in the water that give a kind of structure to the data and determine the microbial populations. This structure provides us with an intuitive way to use music to describe a wide range of natural phenomena,” explains Larsen in an Argonne National Laboratories article.
Speaking to Living on Earth, Larsen describes how their music highlights the relationship between different kinds of data. “In most of the pieces that we have posted, the melody is derived from a numerical measurement, such that the lowest measure is the lowest note and the highest measure is the highest note. The other component is the chords. And the chords map to a different component of the data.”
As a result, the music generated from microbial abundance data played to chords generated from phosphorus concentration data will sound quite different from the same microbial data played to chords derived from temperature data.
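As an illustration only (the paper’s actual algorithm is more involved, and the function, note range, and sample values here are my own), the melody mapping Larsen describes, with the lowest measurement becoming the lowest note and the highest becoming the highest, can be sketched like this:

```python
def measurements_to_notes(values, low_note=60, high_note=72):
    """Linearly map a series of measurements onto MIDI note numbers,
    so the smallest value becomes low_note and the largest high_note."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # avoid division by zero for constant data
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in values]

# Hypothetical chlorophyll readings -> a one-octave melody (middle C to the C above)
chlorophyll = [0.2, 0.5, 1.1, 0.8, 0.3]
print(measurements_to_notes(chlorophyll))  # [60, 64, 72, 68, 61]
```

Swapping in a different data series for the chords while keeping the melody fixed is what makes the same microbial data “sound quite different” depending on which environmental variable accompanies it.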
“Songs themselves probably are never going to actively replace, you know, the bar graph for data analysis, but I think that this kind of translation of complex data into a very accessible format is an opportunity to lead people who probably aren’t highly aware of the importance of microbial ecology in the ocean, and give them a very appealing entry into this kind of data”, explained Larsen in the same interview with Living on Earth.
Though their primary intent was to create a novel way to represent the interactions of microbes in the ocean, the study also suggests that microbial bebop may eventually have applications in crowd-sourcing solutions to complex environmental issues.
For further reading, a PLOS ONE paper in 2010 demonstrated that the metaphors used to explain a problem could have a powerful impact on people’s thoughts and decisions when designing solutions. Could re-phrasing complex environmental data in music lead to solutions we haven’t heard yet? As you ponder the question, listen to some microbial bebop!
Citations: Larsen P, Gilbert J (2013) Microbial Bebop: Creating Music from Complex Dynamics in Microbial Ecology. PLoS ONE 8(3): e58119. doi:10.1371/journal.pone.0058119
Thibodeau PH, Boroditsky L (2011) Metaphors We Think With: The Role of Metaphor in Reasoning. PLoS ONE 6(2): e16782. doi:10.1371/journal.pone.0016782
Image: sheet music by jamuraa on Flickr