Imagine the following scenario. You are in a foreign country where you don’t know the language. You might be fluent in English, but that doesn’t count for much because most people in the country don’t know English. You are lost in the underground trying to find your way back to the hotel. You are struggling to find a map with English on it. You are struggling to find someone to talk to who can understand you and help. For many people this is quite a stressful situation. Personally, I find it a bit amusing, and it’s one of those “remember the time when we…” moments that you tell your friends about and laugh your heart out. However, if you are not on vacation but on a business trip and have to catch a really important meeting, I could see myself stressed too.
You finally find your way around. Reflecting on such an occasion, though, you can see some resemblance to how mute people must feel. You might be talking to other people, but they don’t understand you. You utilize whatever communication channels you have apart from your own voice and speech. On the other hand, the people you are talking to are like deaf people. They can hear you but not understand you. They try to make sense of your body language and expressions in order to communicate. So, my question is: could language be considered a kind of disability? If so, could solutions for that problem help people with disabilities too?
Recently I came across an article with a demonstration video from Microsoft showing their new speech recognition system which, as they say, improves accuracy to about 7 out of 8 words, whereas today’s systems have an average accuracy of 3 out of 4 words. This is great news, I thought at first. The demonstration, though, went even further. They also developed a system using the Bing translation engine that could translate English to almost any language in real time. You can see the live captioning towards the end of the speech being automatically translated to Mandarin. The most fun part, though, was that they also developed a Text-to-Speech system that can be trained on your voice and use it to read text aloud. Tying that to the already presented speech recognition and translation engine, they made a system that could translate what you say live and speak it out using your own voice. When this becomes available at large scale on mobile phones, they could be like those translation devices you see in Star Trek. At the end of the show, Rick Rashid, head of Microsoft Research, who gives the presentation, says they expect to break language barriers in a few years.
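To put those accuracy figures in perspective: going from 3 out of 4 words to 7 out of 8 means the word error rate is cut in half, from 25% to 12.5%. A quick back-of-the-envelope calculation (this is just my own arithmetic on the numbers quoted in the demo, not anything from the video itself):

```python
# Rough comparison of the accuracy figures quoted in the demo.
old_accuracy = 3 / 4   # today's systems: ~1 word wrong in every 4
new_accuracy = 7 / 8   # the new system: ~1 word wrong in every 8

old_error_rate = 1 - old_accuracy   # 0.25
new_error_rate = 1 - new_accuracy   # 0.125

# The error rate drops by half, so on average you get twice as many
# correctly recognized words between mistakes.
improvement = old_error_rate / new_error_rate
print(improvement)  # prints 2.0
```

Halving the error rate may not sound dramatic, but in a live conversation it is the difference between a transcript you constantly have to second-guess and one you can mostly trust.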
So, if language barriers are about to break in a few years, we could also have broken the communication barriers for deaf people, at least. I mean, if we can have live captioning software on our smartphones, deaf people could read on the screen what is being said by their friends while talking to them. To take it a step further, why should you read this on your mobile? We’ve already seen the Google Glass project. Why would a deaf person have to be distracted by the phone’s screen to read what is being said? He could wear a pair of those glasses and read it as speech bubbles in comics! What I don’t know yet is how easy it would be to type text on your pair of glasses. This could be useful for the other person in the scenario. A mute person with a typing interface fast enough and not too tiring could also use glasses or any other similar device to type text that would be read aloud through a text-to-speech engine.
So, getting back to my initial question. Is language a disability? It is certainly a barrier in our communication. Could solutions for breaking language barriers be used in other ways? Definitely. That’s another example of assistive technologies becoming mainstream… (or is it the other way around?). This mainstreaming leads to wider target groups for these technologies and therefore lower costs and more effective solutions for people with disabilities.