[personal profile] jesse_the_k
I eagerly anticipate every new post by Mel Chua, although half of them discuss pedagogic philosophy above my head. The other half offer glorious insights into her intersectional experiences as Asian-American, engineer, deaf & Deaf, female, and teacher. This essay addresses the high cognitive load that lip-reading imposes, as well as the utility of residual hearing. A taste:

In my case — and the case of my deaf friends who prefer to not use residual hearing when there’s another access option available — we’re patching across multiple languages/modalities on a time delay, and that triggers two competing thought streams. If you want to know what that feels like, try to fluently type a letter to one friend while speaking to another on a different topic. Physically, you can do it — your eyeballs and hands are on the written letter, your ears and mouth are in the spoken conversation — but your brain will struggle. Don’t switch back and forth between them (which is what most people will immediately start to do) — actually do both tasks in parallel. It’s very, very hard. In our case, one stream is lossy auditory English as the speaker utters something, and the other is clear written English or clear ASL visuals some seconds behind it. (Assuming your provider is good. Sometimes this data stream is … less clear and accurate than one might like.) Merging/reconciling the two streams is one heck of a mental load… and since we can shut off the lossy auditory English as “noise” rather than “signal,” sometimes we do.

"Oral deaf audio MacGyver: identifying speakers"

http://melchau-feed.dreamwidth.org/24279.html

Or

http://blog.melchua.com/?p=5184
