Kuhl Constructs: How Babies Form Foundations for Language

Years ago, I was captivated by an adorable baby on the front cover of a book, “The Scientist in the Crib: What Early Learning Tells Us About the Mind,” written by a trio of research scientists: Alison Gopnik, Andrew Meltzoff and Patricia Kuhl.

At the time, I was simply interested in how babies learn about their worlds, how they conduct experiments, and how this learning could impact early brain development. I did not realize the extent to which interactions with family, caregivers, society and culture could shape the direction of a young child’s future.

Now, as a speech-language pathologist in early intervention in Massachusetts, more cognizant of the myriad factors that shape a child’s cognitive, social-emotional, language, and literacy development, I have been absolutely delighted to discover more of the work of Kuhl, a distinguished speech and hearing scientist at the University of Washington. So, last spring, when I read that she was going to present “Babies’ Language Skills” as part of a two-part seminar in the Mind, Brain, and Behavior Annual Distinguished Lecture Series at Harvard University, I was thrilled to have the opportunity to attend. Below are some highlights from that experience and the questions it has since sparked for me.

Who is Patricia Kuhl and how has her work reshaped our knowledge about how babies learn language?

Kuhl, co-director of the University of Washington’s Institute for Learning and Brain Sciences, has been internationally recognized for her research on early language and brain development, and for her studies on how young children learn. In her most recent research experiments, she’s been using magnetoencephalography (MEG)—a relatively new neuroscience technology that measures magnetic fields generated by the activity of brain cells—to investigate how, where and with what frequency babies from around the world process speech sounds in the brain when they are listening to adults speak in their native and non-native languages.

Not only does Kuhl’s research point us in the direction of how babies learn to process phonemes, the sound units upon which many languages are built, but it is part of a larger body of studies looking at infants across languages and cultures that has revolutionized our understanding of language development over the last half of the 20th century—leading to, as Kuhl puts it in a 2000 paper on language acquisition she wrote for the Proceedings of the National Academy of Sciences, “a new view of language acquisition that accounts for both the initial state of linguistic knowledge in infants, and infants’ extraordinary ability to learn simply by listening to their native language.”

What is neuroplasticity and how does it underlie child development?

Babies are born with 100 billion neurons, about the same as the number of stars in the Milky Way. In “The Whole Brain Child,” Daniel Siegel and Tina Payne Bryson explain that when we undergo an experience, these brain cells respond through changes in patterns of electrical activity—in other words, they “fire” electrical signals called “action potentials.”

In a child’s first years of life, the brain exhibits extraordinary neuroplasticity, refining its circuits in response to environmental experiences. Synapses—the sites of communication between neurons—are built, strengthened, weakened and pruned away as needed. Two short videos from the Center on the Developing Child at Harvard, “Experiences Build Brain Architecture” and “Serve and Return Interaction Shapes Brain Circuitry,” nicely depict how some of this early brain development happens.

Since brain circuits organize and reorganize themselves in response to an infant’s interactions with his or her environment, exposing babies to a variety of positive experiences (such as talking, cuddling, reading, singing and playing in different environments) not only helps tune babies in to the language of their culture, but also builds a foundation for developing the attention, cognition, memory, social-emotional, language and literacy, and sensory and motor skills that will help them reach their potential later on.

When and how do babies become “language-bound” listeners?

In her 2011 TED talk, “The Linguistic Genius of Babies,” Kuhl discusses how babies under 8 months of age from different cultures can detect sounds in any language from around the world, but adults cannot do this.

So when exactly do babies go from being “citizens of the world,” as Kuhl puts it, to becoming “language-bound” listeners, specifically focused on the language of their culture?

Between 8 and 10 months of age, when babies are trying to master the sounds used in their native language, they enter a critical period for sound development. Kuhl explains that in one set of experiments, she compared a group of babies in America learning to differentiate the sounds “/Ra/” and “/La/” with a group of babies in Japan. At 6-8 months, the babies in both cultures discriminated these sounds equally well. However, when tested again at 10-12 months, the babies in Seattle, Washington, were much better at detecting the “/Ra/-/La/” shift than were the Japanese babies.

Kuhl explains these results by suggesting that babies “take statistics” on how frequently they hear sounds in their native and non-native languages. Because “/Ra/” and “/La/” occur frequently in English, the American babies heard these sounds far more often than the Japanese babies did, and their brains tuned in to the distinction. Kuhl believes that the results of this study indicate a shift in brain development, during which babies from each culture are preparing for their own languages and becoming “language-bound” listeners.
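To make the “taking statistics” idea concrete, here is a minimal, purely illustrative sketch (not Kuhl’s actual analysis, and using made-up toy data) of a learner that simply tallies how often each speech sound appears in the input it hears; sounds that rarely occur carry little weight in the resulting distribution.

```python
# Toy illustration only: made-up "phoneme streams" stand in for ambient speech.
from collections import Counter

english_input = "r l r t l k r l s r l r".split()
japanese_input = "ɾ t k s ɾ m ɾ n k ɾ s t".split()  # a single flap; no /r/-/l/ contrast

def relative_frequencies(phonemes):
    """Return each sound's share of the total input heard so far."""
    counts = Counter(phonemes)
    total = sum(counts.values())
    return {sound: round(count / total, 2) for sound, count in counts.items()}

print("English-like input: ", relative_frequencies(english_input))
print("Japanese-like input:", relative_frequencies(japanese_input))
# In the English-like stream, /r/ and /l/ both occur often, so keeping them
# distinct pays off; in the Japanese-like stream neither does, so there is
# little statistical pressure to maintain the contrast.
```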

In what ways are nurturing interactions with caregivers more valuable to babies’ early language development than interfacing with technology?

If parents, caregivers and other children can help mold babies’ language development simply by talking to them, it is tempting to ask whether young babies can learn language by listening to the radio, watching television, or playing on their parents’ mobile devices. After all, what could be more engaging than the brightly colored screens of the latest and greatest smartphones, iPads, iPods, and computers? They’re perfect for entertaining babies. In fact, some babies and toddlers can operate their parents’ devices before they have even learned to talk.

However, based on her research, Kuhl states that young babies cannot learn language from television; they need lots of face-to-face interaction to learn how to talk. In one interesting study, Kuhl’s team exposed 9-month-old American babies to Mandarin in various forms—in-person interactions with native Mandarin speakers vs. audiovisual or audio recordings of those speakers—and then looked at the impact of this exposure on the babies’ ability to discriminate Mandarin phonetic contrasts (not found in English) at 10-12 months of age.

Strikingly, 12 laboratory visits featuring in-person interactions with the native Mandarin speakers were sufficient to teach the American babies how to distinguish the Mandarin sounds as well as Taiwanese babies of the same age. However, the same number of lab visits featuring the audiovisual or audio recordings made no impact. American babies exposed to Mandarin through these technologies performed the same as a control group of American babies exposed to native English speakers during their lab visits.

Kuhl believes that this is primarily because a baby’s interactions with others engage the social brain, a critical element in helping children learn to communicate in their native and non-native languages. In other words, learning language is not simply a technical skill that can be picked up by listening to a recording or watching a show on a screen. Instead, it is a special gift that is handed down from one generation to the next.

Language is learned through talking, singing, storytelling, reading and many other nurturing experiences shared between caregiver and child. Babies are naturally curious; they watch every movement and listen to every sound around them. When parents talk, babies look up and watch their mouth movements with intense wonder. Parents respond in turn, speaking in “motherese,” a special variant of language that bathes babies in the sound patterns and speech sounds of their native language. Motherese helps babies hear the “edges” of sounds, the very thing that is difficult for children who later show signs of dyslexia or auditory processing issues.

Over time, by listening to and engaging with the speakers around them, babies build sound maps, which set the stage for them to say words and learn to read later on. In fact, based on years of research, Kuhl has discovered that a baby’s ability to discriminate phonemes at 7 months is a predictor of that child’s reading skills at age 5, as noted in a Harvard Crimson article on Kuhl’s Harvard lecture series.

I believe that educating families about brain development, nurturing interactions, and the benefits and limits of technology is absolutely critical to helping families focus on what is most important in developing their children’s communication skills. I also believe that Kuhl’s work is invaluable in this regard. Not only has it focused my attention on how babies form foundations for language, but it has illuminated my understanding of how caregiver-child interactions help set the stage for babies to become language-bound learners.

Sarah Andrews Roehrich, MS, CCC-SLP, works in early intervention at the ThomAnne Sullivan Early Intervention Center in Lowell, Mass.

Note: This post was adapted from an original version published by E/I Balance, a community blog on mental health research, set up by the Conte Center at Harvard University.

 

Speech Therapy and Aging: Brain Plasticity and Cueing Hierarchies


Given our knowledge of the plasticity of the brain, are we, as clinicians or caregivers, able to help develop new links with a behavioral model by using graded cueing hierarchies? Could this low-tech, pharmaceutical-free form of treatment have neurologically based implications for rehabilitation and adaptation in communicatively challenging settings?

Perhaps more testing with fMRI scans will be necessary to really prove the theory. Therapy approaches using cueing models are well documented in the speech therapy literature on aphasia treatment. However, clinician-originated cues can either help create new links and expedite broad cognitive and linguistic improvement, or merely maintain the functional status quo; the difference lies in analyzing the kinds of cues we are using and the amount of independence we are carefully eliciting from the client.

By looking at each task and the cue it requires on a continuum from simple to complex and concrete to abstract, you can construct a grid of where on the continuum the client functions and of how you can provide a cue, or help the client provide his or her own cues, for success.

The idea that damaged axons and dendrites in the brain are looking for connections and stay active when the brain is activated prompted me to create a cueing continuum (see http://carmichaellab.neurology.ucla.edu/integrated-view-neural-repair-after-stroke). On the theory that the client can develop new pathways, if we always fill in the missing word or provide the first phoneme, the client will never have to learn where to get it on his or her own (via the written word, for example). But how do we get from writing the word for the client to having the client write the word in the air and say it? It all depends on residual abilities, but the concept can be applied to everyone.

We have a 60-year-old gentleman with TBI who is learning how to semantically cue himself to find a word. Initially, he had severe speech and cognitive impairments. Now, in conversation, he often uses circumlocution to get his point across. Sometimes, however, specific words are warranted, and this is difficult for him. He can sometimes spell a word aloud even though he cannot speak it. We had him do this several times with great success. Our next task was to remind him that he could do this to help himself. Later, we only had to ask him what he would like to do. We are helping him build those dendritic links (and learn to use a skill) by carefully reducing the amount of clinician prompting or cueing during the sessions and writing down the strategies for him to practice at home.

Although there are many approaches to cueing, none of them seems to describe cues on a continuum from most invasive to most independent. Many clinicians describe cues as semantic or phonemic. I found that there were nuances in cueing that I had learned over the years to allow the client to gradually become independent. When I had difficulty transmitting these ideas to my students, I created a loose continuum to mark where our clients fell given specific objectives, and how we could get their neurons closer together (behaviorally, if not literally) by breaking the cues down.

Along with the goals we establish for our clients, no matter their abilities, we must always be evaluating their behavior and trying new materials and varied activities to facilitate language.

As we converse with others, we derive cues from the environment and from the people with whom we are speaking (that is part of the reason why conversation among adults with neurologically based language impairments looks better than their performance when we test them by looking for specific words and longer utterances).

Our goal with cueing is to develop self-cues and elicit more language. A self-cue can be as basic as a gesture or a drawing, but if the client is doing it and communicating to me what he did over the weekend, then he has been successful. Often, when the stress is lower or the focus is away from speaking, the words and incidental phrases flow more freely. The best reward is to see the expression on a client’s face when he says a few words effortlessly because he was engaged in the activity. But that alone is not what we are trying to do. We are trying to give him real tools for those times he cannot utter a word.

When the client leaves the therapy room, we want him or her to be able to use his or her own skills rather than rely on others. Since clients may not be able to develop their own means of self-cueing, we include self-cue skill development as part of the therapy plan. The client may or may not yet have the ability to provide his or her own cues. But throughout the therapy and rehabilitation process, we work toward self-cueing skills at whatever level is available, such as writing, gesturing, drawing pictures, and talking about the item or activity with the words that are available.

The Cueing Hierarchy Continuum is by no means linear, but it generally runs from simple and most dependent to complex and independent. The cues follow the behavioral branches that may be used in clinic therapy logs, and they are separated on my behavior grid into three categories: Clinician-Assisted Cues, Clinician Prompting (reminding the client to use a strategy) and Self-Correcting. This approach requires that the client learn about his or her strengths and how to implement them to improve what we would consider weaknesses. By identifying which cues are more dependent, we can be cognizant of allowing the client to work at a documented, realistic level to achieve the objective.
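To make the idea of such a grid concrete, here is a minimal, hypothetical sketch of one way the continuum could be captured in session notes. The level names, their ordering, and the logging helper are illustrative assumptions only, not the author’s actual documentation system.

```python
# Hypothetical sketch: cue levels ordered from most clinician-dependent to most
# independent, grouped under the three categories named above.
from dataclasses import dataclass

CUE_LEVELS = [
    ("Clinician-Assisted Cues", "clinician writes or says the word for the client"),
    ("Clinician-Assisted Cues", "clinician provides the first phoneme"),
    ("Clinician-Assisted Cues", "clinician provides a semantic cue"),
    ("Clinician Prompting", "clinician reminds the client that a strategy is available"),
    ("Self-Correcting", "client spells, writes, or gestures to cue himself or herself"),
    ("Self-Correcting", "client retrieves the word with no external cue"),
]

@dataclass
class Attempt:
    """One word-finding attempt: the least-supportive cue that led to success."""
    target_word: str
    level_index: int  # index into CUE_LEVELS; higher means more independent

    @property
    def category(self) -> str:
        return CUE_LEVELS[self.level_index][0]

# Example session log: over time, we hope the indices (independence) drift upward.
session = [Attempt("coffee", 2), Attempt("garden", 3), Attempt("Tuesday", 4)]
for attempt in session:
    level_category, level_description = CUE_LEVELS[attempt.level_index]
    print(f"{attempt.target_word}: {level_category} - {level_description}")
```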

It is well documented that there is enough plasticity in almost any brain to stimulate, heal and renew brain function after a stroke or TBI. For cognitive loss during normal aging and for the dementias, including the progressive dementias, there is less clear documentation of which approaches are the most effective and pragmatic for our clients. However, similar principles can be used to establish functional objectives along with the family and caregivers.

How do we develop skills? How do we develop strategies for short- and long-term functional success? Sometimes we spend the therapy session working on comprehension, word finding, writing and reading using a variety of materials. But if we don’t address what clients are learning outside the therapy room, which they may visit only once or twice a week, how will compensatory skills, adaptive skills and new connections be utilized? That will be the topic for next month’s post.

Betsy C. Schreiber, MMS, CCC-SLP, received a BA in psychology and a Master of Medical Science (MMS) in speech pathology from Emory University in Atlanta, Georgia. She earned her CCC during the three years she worked at Hitchcock Rehabilitation Center in Aiken, South Carolina, where she had the opportunity to learn about NDT and Sensory Integration with the original, Jean Ayres, working with children with LD and CP and with neurologically impaired adults. She is currently a clinical supervisor at Ladge Speech and Hearing Clinic at LIU/Post on Long Island, and a partner at Hope 4 Speech Associates, P.C. She has also served as an ASHA mentor and hopes to participate in ASHA’s Political Action Committee in the coming year. She is an affiliate of ASHA Special Interest Groups 2, Neurophysiology and Neurogenic Speech and Language Disorders, and 18, Telepractice.