Researchers discover a lack of lullabies and other live-music stimuli.
While past research has shown that speech plays a critical role in children’s language development, less is known about the music that infants hear—and the role it plays in their brain development.
A new University of Washington study compares the amounts of music and speech that children hear in infancy. Results showed that infants hear more spoken language than music, with the gap widening as the babies get older.
“We wanted to get a snapshot of what’s happening in infants’ home environments,” says Christina Zhao, ’15, an assistant research professor in the Department of Speech and Hearing Sciences. “Quite a few studies have looked at how many words babies hear at home, and they’ve shown that it’s the amount of infant-directed speech that’s important in language development. We realized we don’t know anything about what type of music babies are hearing, and how it compares to speech.”
Researchers analyzed data from daylong audio recordings collected in English-learning infants’ home environments at ages 6, 10, 14, 18 and 24 months. At every age, infants heard more music from electronic devices than from in-person sources; for speech, the pattern was reversed. And while the percentage of speech intended for infants increased significantly with age, the percentage of music intended for them stayed constant.
“We’re shocked at how little music is in these recordings,” says Zhao, who directs the Lab for Early Auditory Perception (LEAP), which is housed in the Institute for Learning & Brain Sciences (I-LABS). “The majority of music is not intended for babies. We can imagine these are songs streaming in the background or on the radio in the car. A lot of it is just ambient.”
This is a departure from the highly engaging, multi-sensory, movement-oriented music exposure that Zhao and her team had been studying in the lab. During these sessions, infants were given instruments and caregivers synchronized the babies’ movements to music. A control group of babies simply played.
“We did that twice,” Zhao says. “Both times, we saw the same result: that music intervention was enhancing infants’ neural responses to speech sounds. That got us thinking about what would happen in the real world. This study is the first step into that bigger question.”
Past studies have largely relied on parental reports to examine musical input in infants’ environments, but parents tend to overestimate how much they talk or sing to their children. This study addresses that limitation by analyzing daylong audio recordings. The recordings, originally created for a separate study, documented infants’ natural sound environment for up to 16 hours per day over two days at each age.
Researchers then crowdsourced the process of annotating the recording data through Zooniverse, a citizen-science web portal. When volunteers identified speech or music, they noted whether it came from a live person or an electronic source. Finally, they judged whether the speech or music was intended for a baby.
The researchers will expand their study to see if the results can be generalized to different cultures and populations. They also want to explore when music moments happen in infants’ lives. “We’re curious to see whether music input is correlated with any developmental milestones later on for these babies,” Zhao says. “We know speech input is highly correlated with later language skills. In our data, we see that speech and music input are not correlated—so it’s not like a family who tends to talk more will also have more music. We’re trying to see if music contributes more independently to certain aspects of development.”
The study was published in the journal Developmental Science.