As someone who is hopeless at learning languages, I have a deep admiration for those who pick up new ones with seeming ease. I am even more impressed by those who provide simultaneous translations, able to achieve the incredible feat of listening to a stream of words entering their heads in one language, translating it in their brains, and then sending the message out in a different language, all the while taking in a fresh stream of words.
But the apex of my admiration is reserved for those who simultaneously translate into sign language, because that involves having the brain also shift modes from verbal to hand gestures, surely adding a significant new layer of complexity.
During Hurricane Sandy, one such translator, Lydia Callis, captured people’s attention with her highly animated and expressive face as she translated the words of the New York City mayor and his officials during a press conference.
I have observed simultaneous verbal-to-verbal translators in action and noted that their faces often seem to convey the emotions evoked by the original speaker, suggesting that theirs is not a merely mechanical technique. But what I had not known until now is that the facial expressions of verbal-to-sign translators are an essential component of the translation process and not merely an idiosyncratic add-on. Cord Jefferson points to a paper on American Sign Language (ASL) that explains the process:
Mouth shape, eye-brow height, and other face/head movements are a required part of ASL, and identical hand movements may have different meanings depending on the face/head. Facial expressions change the meaning of adjectives (e.g., color intensity or distance magnitude) or convey adverbial information (e.g., carelessly or with relaxed enjoyment). The head/face indicates important grammar information about phrases … A sequence of signs may have different meanings, depending on the head/face; e.g., the ASL sentence “JOHN LOVE MARY” without facial expression means: “John loves Mary.” With a yes/no facial expression, it indicates “Does John love Mary?” With a negative expression and headshake added during “LOVE MARY,” then the same sequence of signs indicates “John doesn’t love Mary.” (Facial expressions are timed to co-occur with hand movements for signs during specific parts of a sentence.) Further, ASL signers also use facial expressions to convey emotional subtext. Thus, facial expressions are essential to the meaning of ASL sentences.
Wow. My admiration has increased even more.
jamessweet says
I went to Rochester Institute of Technology, which shares a campus with the National Technical Institute for the Deaf (NTID), and I still live in Rochester, where there is a sizable deaf community and sign language translators are a common sight. Yeah, it’s pretty impressive.
I saw PZ speak at RIT with a sign language translator. She hung in there, godblessher ;D
Corvus illustris says
You haven’t seen the facial part of ASL in action until you watch it in a context for which it was really not designed: a (small-format, fortunately) section of sophomore differential equations. The public university from which I retired supplied English-to-ASL translators for sections in which deaf students (even one) were enrolled, so I had the experience. The interpreter was superb, and taught me enough technique so that between the blackboard, the gestures and the interpretation we could be sure the content was getting through. (She had no technical background.)
But still--the facial expressions! Maybe emoting about eigenvalues was part of my job description, but I just never previously realized it.
unnullifier says
As someone who has a trivial hearing impairment (low-frequency deafness in one ear), it never really occurred to me that facial expression would be incorporated into sign language. However, when I learned that, it immediately made sense: without facial expressions to carry tone, a deaf person “listening” to sign language would be like a person with normal hearing listening to a computer trying to convert text to speech.
Marcus Ranum says
She’s insanely good. I learned a bit of ASL (more than a bit, actually) when I worked at Gallaudet, and it’s a really, really cool system. It’s not just hand gestures; the facial expressions and relative movements are also crucial. For example, “Can I help you?” is the hand sign for “help,” plus arched eyebrows and an expression to indicate that it’s a question, plus the hand sign moving from “me” toward “you” to indicate the direction the help is going.
cafink says
I love this clip of the Family Guy cast messing with the sign language interpreter at Comic Con:
Funkopolis says
The facial expressions are practically more important than the actual signs. Have you ever watched two people signing to each other? They’re looking each other in the eyes, not at each other’s hands.
Kimpatsu says
Just to be pedantic, Mano, translation is written. Spoken translation is called interpreting. Other languages make the same distinction, too.
As a translator, I know this. Cheers.
Mano Singham says
Really? I was not aware of the distinction. Thanks.
Tim says
I, too, was impressed by Lydia Callis’ abilities. I was also impressed that New York would be savvy enough to include a sign language interpreter in their press conferences.
I am not hearing impaired, nor do I know anyone who is deaf, but I’ve also been surprised that more government bodies (like, say, the White House) do not include sign language interpreters at press conferences.
The difference was especially striking to me in the run-up to Sandy.
Mano Singham says
Yes, the White House really should set the standard by requiring a sign language interpreter for all major events.