The first ideas about interlingual machine translation appeared in the 17th century with Descartes and Leibniz, who proposed creating dictionaries based on universal numerical codes, not unlike the numerical tokens used by large language models today. Others, such as Cave Beck, Athanasius Kircher and Johann Joachim Becher, worked on developing an unambiguous universal language based on the principles of logic and iconographs. In 1668, John Wilkins described his interlingua in his "Essay towards a Real Character and a Philosophical Language". In the 18th and 19th centuries many proposals for "universal" international languages were developed, the best known being Esperanto.

That said, the idea of a universal language did not figure in any of the first significant approaches to machine translation; instead, work started on pairs of languages. During the 1950s and 60s, however, researchers in Cambridge headed by Margaret Masterman, in Leningrad headed by Nikolai Andreev, and in Milan headed by Silvio Ceccato began working in this area. The idea was discussed extensively by the Israeli philosopher
Yehoshua Bar-Hillel in 1969.

During the 1970s, noteworthy research was done in Grenoble by researchers attempting to translate physics and mathematical texts from Russian to French, while in Texas a similar project (METAL) was under way for Russian to English. Early interlingual MT systems were also built at Stanford in the 1970s by Roger Schank and Yorick Wilks; the former became the basis of a commercial system for the transfer of funds, and the latter's code is preserved at The Computer Museum in Boston as the first interlingual machine translation system.

In the 1980s, renewed relevance was given to interlingua-based and knowledge-based approaches to machine translation in general, with much research going on in the field. The uniting idea in this research was that high-quality translation did not require total comprehension of the text; instead, the translation should be based on linguistic knowledge and on the specific domain in which the system would be used. The most important research of this era was done in
distributed language translation (DLT) in Utrecht, which worked with a modified version of Esperanto, and in the Fujitsu system in Japan.

In 2016, Google Neural Machine Translation achieved "zero-shot translation", that is, direct translation between language pairs the system was never explicitly trained on. For example, a system trained only on Japanese-English and Korean-English translation can nevertheless perform Japanese-Korean translation. The system appears to have learned a language-independent intermediate representation of language (an "interlingua"), which allows it to perform zero-shot translation by converting into and out of the interlingua.

==Outline==