

(yo͞o`nĭkōd'), set of codes used to represent letters, numbers, control characters, and the like, designed for use internationally in computers. It has been expanded to include such items as scientific, mathematical, and technical symbols, and even musical notation. The Unicode standard defines codes for linguistic symbols used in every major language written today. It includes the Latin alphabet used for English; the Cyrillic alphabet used for Russian; the Greek, Hebrew, and Arabic alphabets; and other alphabets and alphabetlike writing systems used in countries across Europe, Africa, the Indian subcontinent, and Asia, such as Japanese kana, Korean hangeul, and Chinese bopomofo. A large part of the Unicode standard is devoted to thousands of unified character codes for Chinese, Japanese, and Korean ideographs. Adopted as an international standard in 1992, Unicode was originally a "double-byte," or 16-bit, binary code (see numeration) that could represent up to 65,536 items. No longer limited to 16 bits, it can now represent about one million code positions using three encoding forms called Unicode Transformation Formats (UTF). UTF-8, which consists of one-, two-, three-, and four-byte codes, is used extensively in World Wide Web applications; UTF-16, which consists of two- and four-byte codes, is used primarily for data storage and text processing; and UTF-32, which consists of four-byte codes, is used where character handling must be as efficient as possible. See also ASCII.
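The size differences among the three encoding forms can be checked directly. A minimal Python sketch (the sample string is an arbitrary illustration mixing an ASCII letter, an accented Latin letter, and a CJK ideograph):

```python
# Compare the three Unicode Transformation Formats for the same text.
# Python's str type holds code points; encode() produces the byte forms.
text = "Aé中"

for form in ("utf-8", "utf-16-be", "utf-32-be"):
    data = text.encode(form)
    print(form, len(data), data.hex())
```

UTF-8 yields 6 bytes here (1 + 2 + 3), UTF-16 also 6 (2 per character), and UTF-32 always 4 per character, for 12.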
The Columbia Electronic Encyclopedia™ Copyright © 2013, Columbia University Press. Licensed from Columbia University Press. All rights reserved. www.cc.columbia.edu/cu/cup/


Computing: a character set for all languages
Collins Discovery Encyclopedia, 1st edition © HarperCollins Publishers 2005


1. <character> A 16-bit character set standard, designed and maintained by the non-profit consortium Unicode Inc.

Originally Unicode was designed to be universal, unique, and uniform, i.e., the code was to cover all major modern written languages (universal), each character was to have exactly one encoding (unique), and each character was to be represented by a fixed width in bits (uniform).

Parallel to the development of Unicode, an ISO/IEC standard (ISO/IEC 10646) was being worked on that put a large emphasis on compatibility with existing character codes such as ASCII or ISO Latin 1. To avoid having two competing 16-bit standards, in 1992 the two teams compromised on a common character code standard, known both as Unicode and as the Basic Multilingual Plane (BMP) of ISO/IEC 10646.

Since the merger the character codes are the same but the two standards are not identical. The ISO/IEC standard covers only coding while Unicode includes additional specifications that help implementation.

Unicode is not a glyph encoding. The same character can be displayed as a variety of glyphs, depending not only on the font and style but also on the adjacent characters. A sequence of characters can be displayed as a single glyph, or a character can be displayed as a sequence of glyphs. Which is the case is often font-dependent.
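The character-versus-glyph distinction shows up concretely with combining marks: "é" can be one precomposed character or a two-character sequence that a renderer draws as a single glyph. A small Python sketch:

```python
import unicodedata

# "é" as one precomposed character vs. a sequence of two characters
# (e + combining acute accent) rendered as a single glyph.
precomposed = "\u00E9"   # LATIN SMALL LETTER E WITH ACUTE
combining = "e\u0301"    # e + COMBINING ACUTE ACCENT

print(precomposed == combining)   # False: different code sequences
print(unicodedata.normalize("NFC", combining) == precomposed)  # True: canonically equivalent
```

Normalization (here NFC, canonical composition) is the Unicode mechanism for treating such equivalent sequences uniformly.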

See also Jürgen Bettels and F. Avery Bishop's paper Unicode: A universal character code.
This article is provided by FOLDOC - Free Online Dictionary of Computing (foldoc.org)


A character code that defines every character in most of the world's written languages. Although commonly thought to be only a two-byte coding system, Unicode can use from one to four bytes to hold a Unicode "code point" (see below). A code point is a unique number for a character or some symbol such as an accent mark or ligature. Unicode supports more than a million code points, which are written with a "U" followed by a plus sign and the number in hex; for example, the word "Hello" is written U+0048 U+0065 U+006C U+006C U+006F (see hex chart).
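The U+ notation above is easy to reproduce programmatically. A short Python sketch (code_points is a hypothetical helper name):

```python
# Print the U+ notation for each character in a string,
# as in the "Hello" example above.
def code_points(s):
    return " ".join(f"U+{ord(c):04X}" for c in s)

print(code_points("Hello"))  # U+0048 U+0065 U+006C U+006C U+006F
```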

Character Encoding Schemes
There are several formats for storing Unicode code points. When combined with the byte order of the hardware (big endian or little endian), they are known officially as "character encoding schemes." They are also known by their UTF acronyms, which stand for "Unicode Transformation Format" or "Universal Character Set Transformation Format."

UTF-8, 16 and 32
The UTF-8 coding scheme is widely used because words from multiple languages and every type of symbol can be mixed together in the same message without reserving multiple bytes for every character, as UTF-16 and UTF-32 do. With UTF-8, plain ASCII text uses a single byte per character with the high-order bit set to 0. Non-ASCII characters take more than one byte, and the high-order 1 bits of the first byte define how many bytes are used. See byte order, DBCS and emoji.
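The lead-byte rule described above can be sketched in Python (utf8_length is a hypothetical helper name; the sample characters are arbitrary illustrations):

```python
# The high-order bits of a UTF-8 lead byte tell how many bytes the
# character occupies: 0xxxxxxx = 1, 110xxxxx = 2, 1110xxxx = 3,
# 11110xxx = 4.
def utf8_length(lead_byte):
    if lead_byte < 0x80:
        return 1   # 0xxxxxxx: ASCII
    if lead_byte >= 0xF0:
        return 4   # 11110xxx
    if lead_byte >= 0xE0:
        return 3   # 1110xxxx
    if lead_byte >= 0xC0:
        return 2   # 110xxxxx
    raise ValueError("continuation byte, not a lead byte")

for ch in "Aé中😀":
    encoded = ch.encode("utf-8")
    print(ch, len(encoded), utf8_length(encoded[0]))
```

For each character, the length computed from the lead byte alone matches the actual number of encoded bytes.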

 Unicode               ISO 10646   Number     Byte
 Coding Scheme         Equivalent  of Bytes   Order**

 UTF-8                             1-4        BE or LE

 UTF-16                (UCS-2)     2 or 4     BE or LE
 UTF-16BE              (UCS-2)     2 or 4     BE
 UTF-16LE              (UCS-2)     2 or 4     LE

 UTF-32                (UCS-4)     4          BE or LE
 UTF-32BE              (UCS-4)     4          BE
 UTF-32LE              (UCS-4)     4          LE

 UTF-7                             1-4        BE or LE
   (compatible with early 7-bit
    email systems)

 **Byte Order (see byte order)
   BE = big endian
   LE = little endian
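The big-endian/little-endian distinction in the table is simply the order in which the bytes of each code unit are stored. A small Python sketch using U+4E2D as an arbitrary example:

```python
# The same code point (U+4E2D) under big-endian and little-endian
# UTF-16: the two bytes of the 16-bit code unit swap order.
ch = "中"
print(ch.encode("utf-16-be").hex())  # 4e2d
print(ch.encode("utf-16-le").hex())  # 2d4e
```

Encoding schemes without an explicit BE/LE suffix typically prefix a byte-order mark (BOM) so a reader can detect which order was used.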
Copyright © 1981-2019 by The Computer Language Company Inc. All Rights reserved. THIS DEFINITION IS FOR PERSONAL USE ONLY. All other reproduction is strictly prohibited without permission from the publisher.