AI is often touted as the next big thing in technology. It can already recognize music better than humans can and is heading towards mainstream use. Speech comprehension and translation are an important step towards realizing an AI-powered future.
Facebook’s new research into language translation AI has yielded interesting results. By using convolutional neural networks (CNNs), its system can take full advantage of parallel processing to handle this complex task. The company’s AI research team shared papers showing their system translating roughly nine times faster than traditional RNN-based translation software, while also achieving higher accuracy.
The source code and trained models used by the researchers have been open-sourced, allowing anyone to replicate their results.
RNN vs CNN Systems
Machine translation has typically been done with recurrent neural networks (RNNs), which translate a sentence one word at a time (much like older versions of Google Translate). A CNN-based translator, by contrast, gains context by also looking at words further along the sentence.
This is much closer to how humans translate languages. RNN-based systems are normally fine for end users, but they have their limits. This is the first time a CNN-based system has outperformed RNN-based ones in translation accuracy, on top of its advantage in speed.
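The contrast above can be sketched in a toy example (this is an illustration of the general idea, not Facebook's actual model): an RNN-style pass must consume tokens strictly left to right, so each position only "knows" the words before it, while a CNN-style pass gives every position a window that includes words further along the sentence, and each window can be computed independently, which is what makes parallel processing possible.

```python
sentence = ["the", "cat", "sat", "on", "the", "mat"]

def rnn_style_contexts(tokens):
    # Sequential: each step's context is only the tokens seen so far,
    # so step i cannot start until step i-1 has finished.
    state = []
    contexts = []
    for tok in tokens:
        state = state + [tok]          # "hidden state" summarizing the past
        contexts.append(list(state))
    return contexts

def cnn_style_contexts(tokens, kernel=3):
    # Convolutional: each position sees a fixed window around itself,
    # including tokens *ahead* of it; every window is independent of the
    # others, so all positions could be computed in parallel.
    half = kernel // 2
    return [tokens[max(0, i - half): i + half + 1]
            for i in range(len(tokens))]

print(rnn_style_contexts(sentence)[2])  # ['the', 'cat', 'sat'] - past only
print(cnn_style_contexts(sentence)[2])  # ['cat', 'sat', 'on'] - sees ahead
```

In a real model the "contexts" would be vectors combined by learned weights, and CNN layers are stacked so deeper layers see ever-wider windows, but the dependency structure is the same: the RNN is inherently sequential, the CNN is not.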
Facebook is hoping to further develop this approach to eventually cover the more than 6,500 languages spoken worldwide.
Google has been improving its translation systems as well. Google Translate is infamous for translating phrases word for word from one language to another, with often hilarious results (for example, “let us hang out” rendered literally in Urdu). Google has recently switched its older approach over to what it calls Neural Machine Translation, which itself relies on RNNs.
The system improves as it processes more translations. For now it supports nine languages: English, French, German, Spanish, Portuguese, Chinese, Japanese, Korean, and Turkish.