Saturday, December 3, 2005

Web Internationalization [I18N]: Part III

Previously, in Part II, we discussed the basic terms. With those basic concepts in place, we can discuss the complex part: Character Encoding.

//Today, I finally resolved the last few confusing questions about Character Encoding, so I can continue writing this article. :p

//Continued @2005.12.18

Now, let's see what character encoding actually is.

Character encoding was created at the very beginning of computer science. The birth of the computer brought with it the first famous character encoding: ASCII, the abbreviation of the American Standard Code for Information Interchange. It defines how the computer recognizes the 26 English letters, along with digits, punctuation, and some control codes, by assigning each of them a number.
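A quick illustration (a Python sketch of my own, not part of the ASCII standard itself): each character simply maps to a small number, which the computer stores in binary.

    # Each ASCII character is just a number the computer reads in binary.
    for ch in "Aa!":
        print(ch, ord(ch), bin(ord(ch)))
    # A 65 0b1000001
    # a 97 0b1100001
    # ! 33 0b100001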

That's exactly the point!

Character encoding means giving each character a unique number, one that every one of us accepts and every computer accepts. "Accepts" means that when the computer reads that number in binary, it knows which character the number represents.
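Here is a small Python sketch of that agreement (my own illustration): encode a character to its agreed number, then read it back under the same encoding.

    # Encode a character to its agreed-upon number, then decode it back.
    data = "A".encode("ascii")   # the agreed number for 'A' is 65 (0x41)
    print(data[0])               # 65
    print(data.decode("ascii"))  # 'A' -- reading 65 back gives the same character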

And after the computer spread to the whole world, people who do not use English as their native language had to face a fact: they could not add unique numbers for their own characters to ASCII, because ASCII uses only 7 bits to mark characters, which was believed to be enough for Americans. Even when ASCII grew to 8 bits, that is, one byte, it could still only contain 256 characters. How could people outside America use their own characters on a computer? That is the right question, and it brought an answer: double bytes.
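The arithmetic, plus a double-byte encoding in action (a Python sketch I added; GBK here is just my example of a double-byte Chinese encoding):

    print(2 ** 7)   # 128 -- all that 7-bit ASCII can distinguish
    print(2 ** 8)   # 256 -- one byte; still far too few for Chinese, Japanese, ...
    print(2 ** 16)  # 65536 -- two bytes give much more room

    # One Chinese character becomes two bytes under a double-byte encoding.
    print("中".encode("gbk"))  # b'\xd6\xd0'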

Yeah, double bytes, the root of all that mojibake (messed / scrambled text).

Yet they are also the key to resolving it.
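Both sides of that claim fit in one small Python sketch (my illustration): decode double-byte text with the wrong encoding and you get mojibake; agree on the right one and the text survives.

    data = "中文".encode("gbk")    # two Chinese characters, four bytes
    print(data.decode("latin-1"))  # ÖÐÎÄ -- mojibake: the wrong encoding was assumed
    print(data.decode("gbk"))      # 中文 -- intact when both sides agree on GBK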

