Unicode

Unicode

(yo͞o`nĭkōd'), set of codes used to represent letters, numbers, control characters, and the like, designed for use internationally in computers. It has been expanded to include such items as scientific, mathematical, and technical symbols, and even musical notation. The Unicode standard defines codes for linguistic symbols used in every major language written today. It includes the Latin alphabet used for English, the Cyrillic alphabet used for Russian, the Greek, Hebrew, and Arabic alphabets, and other alphabets and alphabetlike writing systems used in countries across Europe, Africa, the Indian subcontinent, and Asia, such as Japanese kana, Korean hangeul, and Chinese bopomofo. A large part of the Unicode standard is devoted to thousands of unified character codes for Chinese, Japanese, and Korean ideographs. Adopted as an international standard in 1992, Unicode was originally a "double-byte," or 16-bit, binary code (see numeration) that could represent up to 65,536 items. No longer limited to 16 bits, it can now represent about one million code positions using three encoding forms called Unicode Transformation Formats (UTF). UTF-8, which consists of one-, two-, three-, and four-byte codes, is used extensively in World Wide Web applications; UTF-16, which consists of two- and four-byte codes, is used primarily for data storage and text processing; and UTF-32, which consists of four-byte codes, is used where character handling must be as efficient as possible. See also ASCII.
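The byte counts of the three transformation formats can be seen directly by encoding the same short string in each form. A minimal Python sketch (the sample string is an arbitrary choice for illustration):

```python
# Encode the same text in the three Unicode Transformation Formats
# and compare the byte counts.
text = "Hi\u00e9"  # 'H', 'i', and 'é' (U+00E9)

utf8 = text.encode("utf-8")       # 'H' and 'i' take 1 byte each, 'é' takes 2
utf16 = text.encode("utf-16-be")  # every BMP character takes 2 bytes
utf32 = text.encode("utf-32-be")  # every character takes 4 bytes

print(len(utf8), len(utf16), len(utf32))  # 4 6 12
```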

Unicode

Computing a character set for all languages

Unicode

1. <character> A 16-bit character set standard, designed and maintained by the non-profit consortium Unicode Inc.

Originally Unicode was designed to be universal, unique, and uniform, i.e., the code was to cover all major modern written languages (universal), each character was to have exactly one encoding (unique), and each character was to be represented by a fixed width in bits (uniform).

In parallel with the development of Unicode, an ISO/IEC standard was being worked on that put a large emphasis on compatibility with existing character codes such as ASCII or ISO Latin 1. To avoid having two competing 16-bit standards, the two teams compromised in 1992 to define a common character code standard, known both as Unicode and as the Basic Multilingual Plane (BMP) of ISO/IEC 10646.

Since the merger the character codes are the same but the two standards are not identical. The ISO/IEC standard covers only coding while Unicode includes additional specifications that help implementation.

Unicode is not a glyph encoding. The same character can be displayed as a variety of glyphs, depending not only on the font and style but also on the adjacent characters. A sequence of characters can be displayed as a single glyph, or one character can be displayed as a sequence of glyphs; which occurs is often font dependent.
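The character-versus-glyph distinction can be made concrete with Unicode normalization. A minimal Python sketch using the standard unicodedata module:

```python
import unicodedata

precomposed = "\u00e9"  # 'é' as the single character U+00E9
combining = "e\u0301"   # 'e' followed by U+0301 COMBINING ACUTE ACCENT

# Two different character sequences, typically displayed as one glyph.
print(len(precomposed), len(combining))  # 1 2

# Normalization converts between the two representations.
print(unicodedata.normalize("NFC", combining) == precomposed)  # True
print(unicodedata.normalize("NFD", precomposed) == combining)  # True
```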

See also Jürgen Bettels and F. Avery Bishop's paper "Unicode: A universal character code".

Unicode

A character code that defines every character in most of the world's written languages. Although Unicode is commonly thought to be a two-byte coding system, a character may occupy from one to four bytes, depending on how the Unicode "code point" is encoded (see below). The code point is a unique number for a character or for a symbol such as an accent mark or ligature. Unicode supports more than a million code points, which are written with a "U" followed by a plus sign and the number in hex; for example, the word "Hello" is written U+0048 U+0065 U+006C U+006C U+006F (see hex chart).
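The U+ notation maps directly onto a character's numeric code point. A minimal Python sketch:

```python
# Print the code point of each character in "Hello" in U+ notation.
for ch in "Hello":
    print(f"U+{ord(ch):04X}", end=" ")
print()
# U+0048 U+0065 U+006C U+006C U+006F

# The mapping works in both directions: a code point names a character.
print(chr(0x0048))  # H
```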

Character Encoding Schemes
There are several formats for storing Unicode code points. When combined with the byte order of the hardware (big endian or little endian), they are known officially as "character encoding schemes." They are also known by their UTF acronyms, which stand for "Unicode Transformation Format" or "Universal Character Set Transformation Format."
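How byte order changes the stored form can be sketched in a few lines of Python, using one code point under different encoding schemes:

```python
# The same code point, U+0041, under different character encoding schemes.
ch = "A"

print(ch.encode("utf-16-be"))  # b'\x00A' -- high byte first (big endian)
print(ch.encode("utf-16-le"))  # b'A\x00' -- low byte first (little endian)

# Plain "utf-16" prepends a byte order mark (BOM) that names the
# endianness: \xff\xfe for little endian, \xfe\xff for big endian.
bom = ch.encode("utf-16")[:2]
print(bom in (b"\xff\xfe", b"\xfe\xff"))  # True
```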

UTF-8, 16 and 32
The UTF-8 coding scheme is widely used because words from multiple languages and every type of symbol can be mixed in the same message without reserving multiple bytes for every character, as UTF-16 and UTF-32 do. With UTF-8, ASCII text uses a single byte per character with the high-order bit set to 0. Non-ASCII characters require two to four bytes, and the number of high-order 1 bits in the first byte indicates how many bytes the character occupies. See byte order, DBCS and emoji.
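The variable byte lengths and the leading-bit pattern of UTF-8 can be verified directly. A small Python sketch (the sample characters are arbitrary choices for illustration):

```python
# UTF-8 byte counts and the high-order bits of the first byte.
samples = {"A": 1, "\u00e9": 2, "\u20ac": 3, "\U0001f600": 4}  # A, é, €, emoji
for ch, expected in samples.items():
    data = ch.encode("utf-8")
    assert len(data) == expected
    print(f"U+{ord(ch):04X} -> {len(data)} byte(s), first byte {data[0]:08b}")
# 1-byte codes start with a 0 bit; multi-byte codes start with as many
# 1 bits as there are bytes in the sequence (110..., 1110..., 11110...).
```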

 Unicode    ISO 10646    Number     Byte
 Coding     Equivalent   of Bytes   Order**
 Scheme

  UTF-8                  1-4        n/a

  UTF-16    (UCS-2)      2 or 4     BE or LE
  UTF-16BE  (UCS-2)      2 or 4     BE
  UTF-16LE  (UCS-2)      2 or 4     LE

  UTF-32    (UCS-4)      4          BE or LE
  UTF-32BE  (UCS-4)      4          BE
  UTF-32LE  (UCS-4)      4          LE


  Pure ASCII
  (compatible with early 7-bit
    e-mail systems)

  UTF-7                  1-4        n/a


 **Byte Order (see byte order)
   BE = big endian
   LE = little endian
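A reader can tell the big endian and little endian variants apart from the byte order mark (BOM), which the self-describing UTF-16 and UTF-32 forms prepend. A minimal Python sketch:

```python
import codecs

# Detect byte order from the BOM at the start of UTF-16 data.
data = "hi".encode("utf-16")  # native byte order, BOM prepended

if data.startswith(codecs.BOM_UTF16_LE):
    order = "little endian"
elif data.startswith(codecs.BOM_UTF16_BE):
    order = "big endian"
else:
    order = "unknown (no BOM)"
print(order)

# Decoding with plain "utf-16" consumes the BOM automatically.
print(data.decode("utf-16"))  # hi
```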