William Safire's muddled orthographical Esperanto
Transliteration refers to conversion between phonetic alphabets. In Japanese I write はな, and I can transliterate this into the Roman alphabet as hana. In English I write cherry, and I can transliterate this into the Japanese katakana syllabary as チェリー.
Transliteration tries to accomplish two quite different things. The first is to write a word in another alphabet so that when it is pronounced according to the rules of the language using that alphabet, it sounds as much as possible “like” the original word in its original language. The second is to provide a unique, bidirectional orthographical mapping from one alphabet (that I don’t know) to another (that I do). Among other things, this lets me enter text into a computer using a keyboard mapping I am more familiar with. Even many Japanese prefer to input Japanese content using the Roman alphabet keyboard mapping.
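That second goal is exactly what a Japanese input method editor (IME) exploits: a deterministic romaji-to-kana mapping lets you type Japanese on a Roman-alphabet keyboard. Here is a toy sketch of the idea; the tiny table and the greedy longest-match loop are illustrative, not a real IME.

```python
# A toy romaji-to-kana converter, illustrating the deterministic
# orthographical mapping an IME relies on. The table is a tiny
# hypothetical subset, not a complete romanization system.
ROMAJI_TO_KANA = {
    "ha": "は", "na": "な", "shi": "し", "ka": "か", "sa": "さ",
}

def to_kana(romaji: str) -> str:
    out, i = [], 0
    while i < len(romaji):
        # Try the longest romaji chunk first (e.g. "shi" before "s").
        for length in (3, 2, 1):
            chunk = romaji[i:i + length]
            if chunk in ROMAJI_TO_KANA:
                out.append(ROMAJI_TO_KANA[chunk])
                i += length
                break
        else:
            out.append(romaji[i])  # pass unknown characters through
            i += 1
    return "".join(out)

print(to_kana("hana"))  # はな
```

Because the mapping is unambiguous in both directions, the same table could be inverted to go from kana back to romaji, which is what makes this a transliteration rather than a mere phonetic approximation.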
(There is a third, less important goal in some transliteration systems: to reproduce structural aspects of the original alphabet. The example I’m familiar with is Japanese. The syllabary is organized into rows (vowels) and columns (consonants). The “ha” column contains ha, hi, hu, he, and ho. The “hu” sound is perceived by most English speakers as being closer to “fu”. Thus, a transliteration system which emphasized phonological fidelity would represent this syllable as “fu”, whereas one emphasizing source-alphabet structural integrity would represent it as “hu”. Does this problem exist in other transliteration systems?)
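The fu/hu split described above is in fact the visible difference between Japan’s two main romanization systems: Hepburn leans toward phonological fidelity for English speakers, while Kunrei-shiki preserves the structural regularity of the kana chart. A minimal sketch of the contrast for the “ha” column:

```python
# The "ha" column of the kana chart, romanized two ways.
# Hepburn: phonological fidelity (ふ -> "fu").
# Kunrei-shiki: structural regularity of the chart (ふ -> "hu").
HA_COLUMN = ["は", "ひ", "ふ", "へ", "ほ"]

HEPBURN = {"は": "ha", "ひ": "hi", "ふ": "fu", "へ": "he", "ほ": "ho"}
KUNREI  = {"は": "ha", "ひ": "hi", "ふ": "hu", "へ": "he", "ほ": "ho"}

for kana in HA_COLUMN:
    print(kana, HEPBURN[kana], KUNREI[kana])
```

The same tension shows up elsewhere in the chart: Hepburn writes し as “shi” and つ as “tsu”, where Kunrei-shiki keeps the columns regular with “si” and “tu”.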
The above is just a basic introduction to transliteration; another is at Wikipedia. What motivated me to post about the topic is the horribly garbled discussion that recently appeared in William Safire’s column in the New York Times, our national newspaper of record.
Safire starts off on the wrong foot, revealing a weak understanding of the distinction between orthography and phonology, making absurd statements such as “The closest I can get in Roman spelling [he means English spelling] to the sound of [Putin’s] name is…”. He then lapses into bemusement at the fact that if for some unknown reason the French were to use the English-style transliteration of Russian President Putin’s name, it would come out sounding like the French word for “prostitute”, and so gee, that must be why they adopted their own weird transliteration. Just how confused all this is has been analyzed in detail by our friends at Blogos.
In a follow-up article, Blogos expresses shock that “there are still people out there, writing columns in some of the most influential newspapers in the world, who think that computers and the Internet can only work with roman alphabets”.
But that’s not exactly where our famed pundit is confused, if you read his closing paragraph closely:
Here’s the problem for globocrats: most computer operating systems are based on the Roman alphabet. Maybe the United Nations will find a new raison d’etre (that’s ray-ZON DET-ra) in standardizing a system to encode Roman and Cyrillic letters and Chinese and Japanese characters to make them computer-friendly on all the world’s screens.
Now he’s started talking about “encodings”, something else he plainly does not understand. It turns out there is a widely-implemented encoding making all the world’s characters “computer-friendly”, called Unicode. Clearly Safire has no idea what is going on in multilingual computing, and one must certainly question his judgment in writing such nonsense in a national newspaper without a minute’s worth of checking. The problem is not that these characters cannot be displayed on “all the world’s screens”, since they can; it’s that, once displayed, they still cannot be read by people who don’t know the alphabets. He continues:
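The point is easy to demonstrate: Unicode already assigns code points to Cyrillic, kana, and Chinese characters side by side, and UTF-8 turns any of them into bytes a computer can store, transmit, and display. A quick sketch (the sample strings are just illustrative):

```python
# Unicode covers all of these scripts in one character set;
# UTF-8 encodes each string to bytes and decodes it back losslessly.
samples = {
    "Cyrillic": "Путин",
    "Katakana": "チェリー",
    "Chinese":  "北京",
}

for script, text in samples.items():
    encoded = text.encode("utf-8")
    assert encoded.decode("utf-8") == text  # round-trips losslessly
    print(script, text, "->", len(encoded), "bytes")
```

Making the characters “computer-friendly” is, in other words, a solved problem; what no encoding can do is make a reader fluent in an alphabet they don’t know.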
…For users of tomorrow’s Internet to accurately cross cultures, experts in phonetics and transliteration will first have to create and agree on a standard system.
Ignoring the fact that Safire is now confusing “cultures” with languages and writing systems, he’s apparently saying that there could be, or should be, some type of orthographical Esperanto that would magically meet the two conflicting objectives of transliteration systems: to be faithful to the original orthography while also being pronounced by native speakers of any world language, according to their language’s phonological rules, in a way which is close to the phonology of the original word. Sorry, all the “transliterati” in the world won’t be able to pull off that trick.
Only then will President Poutine get his real name back.
Bill, he doesn’t need his name back, he never lost it. It’s a Russian name written in Cyrillic. The French didn’t “take it away”, they just tried to write it in their alphabet so people can read it.
So much for our reigning language maven, of whose column, I should add, I am a great fan and faithful reader.
April 7th, 2005 at 01:13
Thanks indeed for spotting the added confusion between encoding and transliterating in the second part of the column, which I had overlooked. A case of confusion overload I guess.
And by the way, if the problem for “users of tomorrow’s Internet” is to be able to depend on a reliable global transliteration system, one may wonder what good it would do them if they don’t speak the language. One can arguably identify a third level of confusion there in Safire’s column, between being able to read the sounds and understanding what they mean.