Every fan of Star Trek remembers the famous “universal translator” from the series: a device that could understand a speaker’s words and translate them accurately into any language. Technology has yet to produce anything quite like it, but Microsoft has taken one step closer to that goal. In October, Rick Rashid, chief research officer at Microsoft, gave a presentation in Tianjin, China, to show the company’s progress on what aims to be one of the most revolutionary tools of recent years: the voice translator. The program uses a technology called “Deep Neural Networks”, which is loosely modeled on the structure of the human brain. In theory, the speech translator understands the speaker’s intonation and then renders the speech first in written, then in spoken form. But can such a technology really work in 2012? And if so, how accurate are the translations?
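To give a flavor of the idea behind deep neural networks, here is a minimal, purely illustrative sketch: such networks stack layers of simple weighted sums, each followed by a non-linearity. The weights and layer sizes below are toy values invented for the example, not anything from Microsoft’s actual speech models.

```python
# Illustrative sketch only: a "deep" network is a stack of layers,
# each computing a weighted sum per unit followed by a non-linearity.
# All weights here are toy values, not trained speech-model parameters.

def relu(x):
    """A common non-linearity: pass positives through, zero out negatives."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum per unit, then ReLU."""
    return [
        relu(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def forward(inputs, net):
    """Feed the input vector through every layer in turn."""
    for weights, biases in net:
        inputs = layer(inputs, weights, biases)
    return inputs

# Two toy layers: 3 inputs -> 4 hidden units -> 2 outputs.
net = [
    ([[0.5, 0.5, 0.5]] * 4, [0.0] * 4),
    ([[0.25, 0.25, 0.25, 0.25]] * 2, [0.0] * 2),
]
print(forward([1.0, 2.0, 3.0], net))  # [3.0, 3.0]
```

Real systems train the weights on vast amounts of speech data; this sketch only shows the layered structure that gives the technique its name.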
In the presentation, Rick Rashid showed the audience a demo of what the Microsoft voice translator can do: the chief research officer spoke in English while the program translated his words into correct Chinese characters and then, to the audience’s delight and amazement, reproduced them aloud in a voice resembling his own. Microsoft has broken new ground with an initiative that promises to dismantle language barriers. However, as Rick Rashid himself admitted, the program is far from perfect, and it will take many years of research and refinement to produce anything like the Star Trek device. Microsoft representatives gave little information about the full capabilities of the project, but the idea raised everyone’s interest.
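The demo described above chains three stages: recognize the English speech, translate the resulting text, and synthesize the translation in a voice resembling the original speaker. The sketch below shows that composition only; every function name and every string in it is an invented placeholder, not a real Microsoft API.

```python
# Hypothetical sketch of the three-stage pipeline described above.
# All names and values are placeholder stubs for illustration only.

def recognize_speech(audio: str) -> str:
    """Stage 1: speech recognition (stubbed as a simple lookup)."""
    return {"hello-audio": "Hello"}[audio]

def translate_text(text: str) -> str:
    """Stage 2: machine translation, English -> Chinese (stubbed)."""
    return {"Hello": "你好"}[text]

def synthesize_voice(text: str, speaker: str) -> str:
    """Stage 3: text-to-speech tagged with the speaker's voice (stubbed)."""
    return f"<{speaker}-voice>{text}</{speaker}-voice>"

def speech_translate(audio: str, speaker: str) -> str:
    """The full pipeline: recognize, translate, then re-voice."""
    return synthesize_voice(translate_text(recognize_speech(audio)), speaker)

print(speech_translate("hello-audio", "rashid"))  # <rashid-voice>你好</rashid-voice>
```

Each stage is a hard research problem in its own right; the demo’s novelty was making all three work together in real time.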
The idea of an automatic translator is not new, and in recent years we have seen similarly ambitious projects: Google Translate has evolved considerably and, although far from perfect, it can still lighten a translator’s workload. Similarly, Apple’s Siri proved surprisingly effective even though, again, it has many aspects that need improvement. In fact, everyone who dreams of voice translation software faces one major obstacle: recognizing human intonation and language patterns. Automatic translators, at least with present technology, give poor results when it comes to collocations, complex structures, or words with multiple meanings. Even something as simple as a comma can completely change the meaning of a sentence, and where the human brain detects and reacts quickly to these subtle nuances – conference interpreters being the perfect example – computer programs often fail.
But even those with a very traditionalist approach have to admit that in the past few years automatic translation software has reached a level we once believed impossible, and Microsoft’s program opens new possibilities. Perhaps voice translators will not be perfected this century, but if technology continues to develop at such a fast pace, we have reason to believe that at some point computers will truly facilitate cross-language communication. Until then, we must rely on human translators to break language barriers and convey information accurately.