The Effectiveness of Language Facilitation

A while back, I posted on the ABCs of ABA. Within that post, I described the basics of ABA, a method of therapy that I believe is often a bit misunderstood. I also promised to follow that post with a more thorough description of the shades of grey that exist within the broader field of ABA.

Before I do that, though, I want to touch on the effectiveness of an approach that often seems to be the very opposite of ABA: indirect language stimulation. And before I do that (hang with me here), I’m going to briefly explain the idea of a continuum of naturalness that exists within the field of speech-language pathology. This term was coined by Marc Fey in 1986 in “Language Intervention with Young Children,” and I think it is a wonderful way to help us wrap our minds around the variables at play when we think about the various methods of therapy.

The ends of this continuum represent the relative naturalness of a treatment context. On one end of the continuum, we have indirect language stimulation approaches. These are highly natural, often embedded within the child’s daily routine, tend to be unstructured, and are built on the idea of being responsive to the child. On the other end of the continuum, we have highly structured ABA approaches, which tend to be highly decontextualized (*not* in the context of daily activities and play), very structured, and highly adult-directed.

In this post, I’m going to cover the left-hand side of this continuum: indirect language stimulation. In a nutshell, this approach to language intervention involves describing what a little one is seeing, doing, and feeling. I’ve described different techniques within this broader method before, in posts such as All Kinds of Talk, Self Talk & Parallel Talk, and Expansion and Extension. As you use these techniques, you provide models of language that match the child’s language level. So, if a baby mainly points and vocalizes, you use one- and two-word phrases; if a toddler uses one- and two-word phrases, you use three- and four-word phrases; if a preschooler uses short sentences without grammar, you respond with longer sentences with appropriate grammar (you get the idea, right?).

These techniques are generally used in the context of on-going activities that happen every day, and are used in a way that is responsive to the child. In other words, you watch what the child is doing, listen to what she is saying, observe what she is watching, and then you respond to that. Watch. Listen. Observe. Describe. Put it all together, and general language stimulation looks a little something like this.

It pretty much looks like nothing is happening, right? Just a mom and her child having a snack. This is what it should look like! It’s natural, which is why it sits at the far left of the continuum of naturalness. But there is more going on than meets the eye. Notice how the language is simple and related to the activity at hand. Also notice mom’s responsiveness: language models are provided in response to the child’s utterances (Child: “Please?” Mom: “You want apple.” “Apple please!”). And when the little one tries to get mom’s attention by saying “mmm,” mom responds with another “mmmm.” They go back and forth a few times; this is turn-taking, and within it lie the beginnings of conversation. Eventually, mom uses a language model directly related to the “mmmm”: “Yummy apple.”

One more example. This activity is a little more structured, but the approach used is the same. Notice how mom’s language is in response to the child’s language (Child: “Ride…” Adult: “You’re riding the bike!”) and take note of the fact that what mom says is just slightly longer than the toddler’s language. And, as an additional bonus, observe how the child’s language changes, from one-word utterances at the beginning to a two-word phrase at the end of the clip. Indirect language stimulation doesn’t always work immediately in the moment like this… but it’s pretty cool when it does!

Despite the fact that indirect language stimulation looks quite simple, research shows that it can be very effective. As I described in All Kinds of Talk, research indicates that the more parents use conversational talk with their typically developing child, the larger that child’s vocabulary will be. When parents are responsive in their conversational interactions with their child, their child’s language grows.

Indirect language stimulation approaches have been shown to be effective for late talkers, too. In their article, Evidence-Based Language Intervention Approaches for Young Talkers, Finestack and Fey summarize the evidence in support of both general language stimulation and focused language stimulation. General language stimulation involves using the techniques I just described in, well, a very general way. This means that there are no specific language targets (say, increasing verbs, or increasing nouns, or getting a child to use a specific type of two-word phrase). Instead, the goal is broad in nature: increase overall language skills. Finestack and Fey describe a randomized controlled trial (in other words, a well-designed, scientific study) of a 12-week program that used general language stimulation (Robertson & Ellis Weismer, cited in Finestack & Fey, 2013). The researchers compared late-talking children who received general language stimulation to late talkers who received no intervention and found that, compared to the children who received no intervention, children who received the intervention made more gains in vocabulary, intelligibility, and socialization. Importantly, the parents of the children who received intervention felt less stress. And who doesn’t want less stress in their life?!

Focused language stimulation is very similar to general language stimulation except that it’s (you guessed it…) focused. The language models provided by adults are chosen specifically for that particular child. So, an adult might model mainly verbs if these are lacking in a child’s language. Or, the adult might model specific nouns. Or, the adult might model a specific type of early grammar marker, such as -ing (one of the earliest ways that children start marking verbs). This type of language stimulation, too, has been shown to be effective. Girolametto et al. (1996, cited in Finestack & Fey, 2013) taught parents to use focused language stimulation with their children. They compared the gains made by the children of these parents to the gains made by children whose parents were not trained in these methods (don’t worry – the non-trained parents got trained at the end of the study, too!). By the end of the study, the children whose parents were trained in focused language stimulation had significantly larger and more diverse vocabularies, used more multi-word phrases, and had better phonology.

It’s important to note that general and focused language stimulation enjoy the most research support when used with late talkers who don’t have any other delays. The research is mixed when it comes to the efficacy of these methods with children with more significant delays and disorders, such as those with autism or cognitive disorders. Because of this, having other tools in our toolbox is very important. This is where the rest of the continuum of naturalness becomes important, and where my passion for contextualized ABA approaches begins. But that’s a post for another day. For today, we’ll stop here, secure in the knowledge that when we surround our typically developing children and late talkers with language models, their language grows.

Finestack, L., & Fey, M. (2013). Evidence-based language intervention approaches for young talkers. In L. Rescorla & P. Dale (Eds.), Late Talkers: Language Development, Interventions, and Outcomes.

Becca Jarzynski, M.S., CCC-SLP, is a pediatric speech-language pathologist in Wisconsin. You can follow her blog, Child Talk, and on Facebook.

Kuhl Constructs: How Babies Form Foundations for Language

Years ago, I was captivated by an adorable baby on the front cover of a book, “The Scientist in the Crib: What Early Learning Tells Us About the Mind,” written by a trio of research scientists: Alison Gopnik, Andrew Meltzoff and Patricia Kuhl.

At the time, I was simply interested in how babies learn about their worlds, how they conduct experiments, and how this learning could impact early brain development. I did not realize the extent to which interactions with family, caregivers, society and culture could shape the direction of a young child’s future.

Now, as a speech-language pathologist working in early intervention in Massachusetts, and more cognizant of the myriad factors that shape a child’s cognitive, social-emotional, language, and literacy development, I have been absolutely delighted to discover more of the work of Kuhl, a distinguished speech scientist at the University of Washington. So, last spring, when I read that she was going to present “Babies’ Language Skills” as one part of a two-part seminar series sponsored by the Mind, Brain, and Behavior Annual Distinguished Lecture Series at Harvard University, I was thrilled to have the opportunity to attend. Below are some highlights from that experience and the questions it has since sparked for me.

Who is Patricia Kuhl and how has her work reshaped our knowledge about how babies learn language?

Kuhl, co-director of the University of Washington’s Institute for Learning and Brain Sciences, has been internationally recognized for her research on early language and brain development, and for her studies on how young children learn. In her most recent research experiments, she’s been using magnetoencephalography (MEG)—a relatively new neuroscience technology that measures magnetic fields generated by the activity of brain cells—to investigate how, where and with what frequency babies from around the world process speech sounds in the brain when they are listening to adults speak in their native and non-native languages.

Not only does Kuhl’s research point us in the direction of how babies learn to process phonemes, the sound units upon which many languages are built, but it is part of a larger body of studies looking at infants across languages and cultures that has revolutionized our understanding of language development over the last half of the 20th century—leading to, as Kuhl puts it in a 2000 paper on language acquisition she wrote for the Proceedings of the National Academy of Sciences, “a new view of language acquisition, that accounts for both the initial state of linguistic knowledge in infants, and infants’ extraordinary ability to learn simply by listening to their native language.”

What is neuroplasticity and how does it underlie child development?

Babies are born with 100 billion neurons, about the same as the number of stars in the Milky Way. In “The Whole Brain Child,” Daniel Siegel and Tina Payne Bryson explain that when we undergo an experience, these brain cells respond through changes in patterns of electrical activity—in other words, they “fire” electrical signals called “action potentials.”

In a child’s first years of life, the brain exhibits extraordinary neuroplasticity, refining its circuits in response to environmental experiences. Synapses—the sites of communication between neurons—are built, strengthened, weakened and pruned away as needed. Two short videos from the Center on the Developing Child at Harvard, “Experiences Build Brain Architecture” and “Serve and Return Interaction Shapes Brain Circuitry,” nicely depict how some of this early brain development happens.

Since brain circuits organize and reorganize themselves in response to an infant’s interactions with his or her environment, exposing babies to a variety of positive experiences (such as talking, cuddling, reading, singing and playing in different environments) not only helps tune babies in to the language of their culture, but also builds a foundation for developing the attention, cognition, memory, social-emotional, language and literacy, and sensory and motor skills that will help them reach their potential later on.

When and how do babies become “language-bound” listeners?

In her 2011 TED talk, “The Linguistic Genius of Babies,” Kuhl discusses how babies under 8 months of age from different cultures can detect sounds in any language from around the world, but adults cannot do this.

So when exactly do babies go from being “citizens of the world,” as Kuhl puts it, to becoming “language-bound” listeners, specifically focused on the language of their culture?

Between 8 and 10 months of age, when babies are trying to master the sounds used in their native language, they enter a critical period for sound development. Kuhl explains that in one set of experiments, she compared a group of babies in America learning to differentiate the sounds /ra/ and /la/ with a group of babies in Japan. Between 6 and 8 months, the babies in both cultures recognized these sounds equally well. However, by 10-12 months, after multiple training sessions, the babies in Seattle, Washington, were much better at detecting the /ra/-/la/ shift than were the Japanese babies.

Kuhl explains these results by suggesting that babies “take statistics” on how frequently they hear sounds in their native and non-native languages. Because /ra/ and /la/ occur more frequently in English than in Japanese, the American babies heard these sounds far more often in their native language than the Japanese babies did. Kuhl believes that the results of this study indicate a shift in brain development, during which babies from each culture are preparing for their own languages and becoming “language-bound” listeners.

In what ways are nurturing interactions with caregivers more valuable to babies’ early language development than interfacing with technology?

If parents, caregivers and other children can help mold babies’ language development simply by talking to them, it is tempting to ask whether young babies can learn language by listening to the radio, watching television, or playing on their parents’ mobile devices. I mean, what could be more engaging than the brightly-colored screens of the latest and greatest smart phones, iPads, iPods, and computers? They’re perfect for entertaining babies. In fact, some babies and toddlers can operate their parents’ devices before even having learned how to talk.

However, based on her research, Kuhl states that young babies cannot learn language from television and that babies need lots of face-to-face interaction to learn how to talk. In one interesting study, Kuhl’s team exposed 9-month-old American babies to Mandarin in various forms—in-person interactions with native Mandarin speakers vs. audiovisual or audio recordings of those speakers—and then looked at the impact of this exposure on the babies’ ability to make Mandarin phonetic contrasts (not found in English) at 10-12 months of age.

Strikingly, 12 laboratory visits featuring in-person interactions with the native Mandarin speakers were sufficient to teach the American babies to distinguish the Mandarin sounds as well as Taiwanese babies of the same age. However, the same number of lab visits featuring the audiovisual or audio recordings had no effect. American babies exposed to Mandarin through these technologies performed the same as a control group of American babies exposed to native English speakers during their lab visits.

Kuhl believes that this is primarily because a baby’s interactions with others engage the social brain, a critical element for helping children learn to communicate in their native and non-native languages. In other words, learning language is not simply a technical skill that can be acquired by listening to a recording or watching a show on a screen. Instead, it is a special gift that is handed down from one generation to the next.

Language is learned through talking, singing, storytelling, reading and many other nurturing experiences shared between caregiver and child. Babies are naturally curious; they watch every movement and listen to every sound around them. When parents talk, babies look up and watch their parents’ mouth movements with intense wonder. Parents respond in turn, speaking in “motherese,” a special variant of language that bathes babies in the sound patterns and speech sounds of their native language. Motherese helps babies hear the “edges” of sound, the very thing that is difficult for babies who exhibit symptoms of dyslexia and auditory processing issues later on.

Over time, by listening to and engaging with the speakers around them, babies build sound maps, which set the stage for them to be able to say words and learn to read later on. In fact, based on years of research, Kuhl has discovered that a baby’s ability to discriminate phonemes at 7 months old is a predictor of that child’s future reading skills at age 5, as noted in a Harvard Crimson article on Kuhl’s Harvard lecture series.

I believe that educating families about brain development, nurturing interactions, and the benefits and limits of technology is absolutely critical to helping families focus on what is most important in developing their children’s communication skills. I also believe that Kuhl’s work is invaluable in this regard. Not only has it focused my attention on how babies form foundations for language, but it has illuminated my understanding of how caregiver-child interactions help set the stage for babies to become language-bound learners.

Sarah Andrews Roehrich, MS, CCC-SLP, works in early intervention at the ThomAnne Sullivan Early Intervention Center in Lowell, Mass.

Note: This post was adapted from an original version published by E/I Balance, a community blog on mental health research, set up by the Conte Center at Harvard University.