Case Study: Interpreting for child development assessments


Some clinical units face unusual challenges in providing effective language support, and the interpreter manager often spends extra time meeting with these care teams to develop high-quality solutions. Consider the following case, which addresses how to build a special interpreting protocol for providers who perform cognitive and developmental assessments on children. It showcases a collaboration between an interpreter department, the local interpreter community, and an interdisciplinary team of child development specialists.

The interpreter services department received many complaints from the multidisciplinary team at the childhood development institute about the quality of interpreting. In unusual unity, both staff and agency interpreters also complained about an ineffective and unsatisfactory process at the institute.

The care team was tasked with performing cognitive and developmental assessments on children from foreign language-speaking families, which is a particularly challenging undertaking for several reasons. Any one of the following conditions would be hard enough to deal with; in combination, the challenge is daunting.


The first difficulty was that children in general are hard to keep engaged in assessment activities in a distracting new location with strangers in the room.


Second, young children look constantly to their parents for validation of what is going on, so it is difficult to conduct an assessment without a parent injecting emotion or encouragement.


Third, the children who are referred for cognitive or developmental assessment are, by definition, behaving outside the norm for American children, which may mean they have a hard time paying attention in the first place.

But when the child is NOT from the American mainstream, the child may be behaving perfectly normally for his or her own family context. Which leads to the most difficult element of all: to perform objective assessments, the care team depends on measurements that hinge on how the child understands and responds to the evaluator's spoken language cues, and the children in question have a different spoken language (and culture) than the children for whom those measurements were validated.

Here is an example of a child development assessment:


The evaluator asks the Spanish-speaking child to act out the word "fly," and waits for the interpreter to interpret the cue. The interpreter asks the evaluator whether she means the insect or the act of flying; in Spanish those two words have nothing in common, so the interpreter has to know which is meant. Of course, the evaluation question is built on the ambiguity of the word in English, so the question is invalid for a non-English-speaking child.

There are many more examples of questions for which the "right" answer would be unknown to a child brought up outside a mainstream American home. Similar problems arise with the developmental tests used to gauge whether the child is behaving age-appropriately: eating behaviors and other behaviors vary widely between cultures and families. As far as we can find, no one has yet built equivalent assessments in other languages or congruent with other child-rearing practices.

This particular interpreter department offered to partner with the providers to critically review the way that non-English-speaking children were evaluated, and particularly to set clear guidelines for how interpreters would interact with the process.



The goal for providers and interpreters alike was to make the assessments more valid. The providers wanted well-trained, effective interpreters who would stay in role. The interpreters wanted assessment protocols that allowed them to interpret effectively in both directions.

The project partnership lasted a year and a half. The providers wrote a handbook for interpreters and trained 90 local interpreters. The handbook and training explained developmental milestones and laid out the expectations each member of the multidisciplinary team had of the interpreter, along with clear guidelines about what the interpreter should and should not do during each phase of the assessment. For example, the interpreter was not to repeat instructions to the child, encourage the child, or reward the child for correct answers by smiling or applauding. The interpreter was also not to pick up objects that the child threw or dropped.

At the same time, the providers committed to many extra steps of their own each time they assessed a non-English-speaking child. They would brief both the interpreter and the parents on the various parts of the process, reassure the parents to allay their fears of negative outcomes, and adjust any part of the assessment that ran up against a linguistic or cultural wall with the child. They even agreed to amend the final briefing of the parents on the assessment outcome to make it less formal and more relevant to the parents.

Please share your own experience in partnering with provider teams to solve persistent language support problems.
