Deaf MacGyver: Juggling visual & auditory inputs
Monday, October 10th, 2016, 10:36 am
"Oral deaf audio MacGyver: identifying speakers"
http://melchau-feed.dreamwidth.org/24279.html
Or
http://blog.melchua.com/?p=5184

Mel Chua's posts are worth reading, although half of them discuss pedagogic philosophy above my head. The other half include glorious insights into her intersectional experiences as Asian-American, engineer, deaf & Deaf, female, and teacher. This essay addresses the high cognitive load that lip-reading imposes, as well as the utility of residual hearing. A taste:

In my case — and the case of my deaf friends who prefer to not use residual hearing when there’s another access option available — we’re patching across multiple languages/modalities on a time delay, and that triggers two competing thought streams. If you want to know what that feels like, try to fluently type a letter to one friend while speaking to another on a different topic. Physically, you can do it — your eyeballs and hands are on the written letter, your ears and mouth are in the spoken conversation — but your brain will struggle. Don’t switch back and forth between them (which is what most people will immediately start to do) — actually do both tasks in parallel. It’s very, very hard. In our case, one stream is lossy auditory English as the speaker utters something, and the other is clear written English or clear ASL visuals some seconds behind it. (Assuming your provider is good. Sometimes this data stream is … less clear and accurate than one might like.) Merging/reconciling the two streams is one heck of a mental load… and since we can shut off the lossy auditory English as “noise” rather than “signal,” sometimes we do.