American Standard Code for Information Interchange

American Standard Code for Information Interchange

see ASCII, or American Standard Code for Information Interchange: a set of codes used to represent letters, numbers, a few symbols, and control characters. Originally designed for teletype operations, it has found wide application in computers.
The Columbia Electronic Encyclopedia™ Copyright © 2013, Columbia University Press. Licensed from Columbia University Press. All rights reserved. www.cc.columbia.edu/cu/cup/

American Standard Code for Information Interchange

[ə′mer·ə·kən ′stan·dərd ′kōd fər in·fər′mā·shən ′in·tər‚chānj]
(communications)
Coded character set to be used for the general interchange of information among information-processing systems, communications systems, and associated equipment; the standard code, comprising characters 0 through 127, includes control codes, upper- and lower-case letters, numerals, punctuation marks, and commonly used symbols; an additional set is known as extended ASCII. Abbreviated ASCII.
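
The counts implied by that layout can be checked with a short Python sketch; the grouping below is an illustrative partition of the 128 code points, not part of the dictionary definition:

    import string

    # Walk the 128 ASCII code points and bucket them roughly as the
    # definition describes them.
    groups = {"control": 0, "digit": 0, "upper": 0, "lower": 0, "space/punct": 0}
    for code in range(128):
        ch = chr(code)
        if code < 32 or code == 127:            # control codes, including DEL
            groups["control"] += 1
        elif ch in string.digits:
            groups["digit"] += 1
        elif ch in string.ascii_uppercase:
            groups["upper"] += 1
        elif ch in string.ascii_lowercase:
            groups["lower"] += 1
        else:                                   # space and punctuation/symbols
            groups["space/punct"] += 1

    print(groups)   # {'control': 33, 'digit': 10, 'upper': 26, 'lower': 26, 'space/punct': 33}
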
McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.

American Standard Code for Information Interchange

The basis of character sets used in almost all present-day computers. US-ASCII uses only the lower seven bits (character points 0 to 127) to convey some control codes, space, numbers, most basic punctuation, and unaccented letters a-z and A-Z. More modern coded character sets (e.g., Latin-1, Unicode) define extensions to ASCII for values above 127, conveying special Latin characters (such as accented letters or the German eszett, ß), characters from non-Latin writing systems (e.g., Cyrillic, or Han characters), and such desirable glyphs as distinct open- and close-quotation marks. ASCII superseded earlier codes such as Baudot and competed with EBCDIC; each of those used a different number of bits per character and was broken in its own way.
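
A minimal Python sketch of the seven-bit range and of how Latin-1 and UTF-8 (Unicode) extend it; the sample characters below are arbitrary choices, not part of the FOLDOC entry:

    # Every US-ASCII character fits in the low seven bits (code points 0-127).
    assert all(ord(c) < 128 for c in "Plain ASCII: a-z, A-Z, 0-9, punctuation")

    # Characters above 127 come from ASCII extensions.  Latin-1 keeps each in
    # a single byte; UTF-8 encodes the same Unicode code point as a multi-byte
    # sequence.  The curly quote has no Latin-1 code point at all.
    for ch in ("é", "ß", "“"):
        print(ch, ord(ch), ch.encode("latin-1", errors="replace"), ch.encode("utf-8"))
    # é 233  b'\xe9'  b'\xc3\xa9'
    # ß 223  b'\xdf'  b'\xc3\x9f'
    # “ 8220 b'?'     b'\xe2\x80\x9c'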

Computers are much pickier about spelling than humans; thus, hackers need to be very precise when talking about characters, and have developed a considerable amount of verbal shorthand for them. Every character has one or more names - some formal, some concise, some silly.

Individual characters are listed in this dictionary with alternative names from revision 2.3 of the Usenet ASCII pronunciation guide in rough order of popularity, including their official ITU-T names and the particularly silly names introduced by INTERCAL.

See ampersand, asterisk, back quote, backslash, caret, colon, comma, commercial at, control-C, dollar, dot, double quote, equals, exclamation mark, greater than, hash, left brace, left bracket, left parenthesis, less than, minus, parentheses, oblique stroke, percent, plus, question mark, right brace, right bracket, right parenthesis, semicolon, single quote, space, tilde, underscore, vertical bar, zero.

Some other common usages cause odd overlaps. The "#", "$", ">", and "&" characters, for example, are all pronounced "hex" in different communities because various assemblers use them as a prefix tag for hexadecimal constants (in particular, "#" in many assembler-programming cultures, "$" in the 6502 world, ">" at Texas Instruments, and "&" on the BBC Micro, Acorn Archimedes, Sinclair, and some Zilog Z80 machines). See also splat.
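
A rough Python sketch of the point that these prefixes differ only in spelling, not in the value they denote; the tiny parser and its prefix list are illustrative assumptions, not any particular assembler's syntax:

    # "#FF" (many assemblers), "$FF" (6502), ">FF" (TI) and "&FF" (BBC Micro
    # and friends) all spell the same hexadecimal constant, 255.
    def parse_hex_constant(text: str) -> int:
        for prefix in ("#", "$", ">", "&", "0x"):
            if text.startswith(prefix):
                return int(text[len(prefix):], 16)
        raise ValueError(f"no recognised hex prefix in {text!r}")

    assert parse_hex_constant("#FF") == parse_hex_constant("$FF") \
        == parse_hex_constant(">FF") == parse_hex_constant("&FF") == 255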

The inability of US-ASCII to correctly represent nearly any language other than English became an obvious and intolerable misfeature as computer use outside the US and UK became the rule rather than the exception (see software rot). And so national extensions to US-ASCII were developed, such as Latin-1.

Hardware and software from the US still tend to embody the assumption that US-ASCII is the universal character set and that words of text consist entirely of byte values 65-90 and 97-122 (A-Z and a-z); this is a major irritant to people who want to use a character set suited to their own languages. Perversely, though, efforts to solve this problem by proliferating sets of national characters produced an evolutionary pressure (especially in protocol design, e.g., the URL standard) to stick to US-ASCII as a subset common to all those in use, and therefore to stick to English as the language encodable with the common subset of all the ASCII dialects. This basic problem with having a multiplicity of national character sets ended up being a prime justification for Unicode, which was designed, ostensibly, to be the *one* ASCII extension anyone will need.
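
One place that pressure is directly visible is URL encoding, where anything outside a safe US-ASCII subset is percent-escaped; a short Python sketch (the example path is made up):

    from urllib.parse import quote, unquote

    # URLs travel as US-ASCII: non-ASCII text is first encoded (as UTF-8 here)
    # and each resulting byte outside the safe set is percent-escaped.
    path = "/wiki/Übergrößen"
    encoded = quote(path)
    print(encoded)                               # /wiki/%C3%9Cbergr%C3%B6%C3%9Fen
    assert all(ord(c) < 128 for c in encoded)    # the wire form is pure ASCII
    assert unquote(encoded) == path              # and it round-trips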

A system is described as "eight-bit clean" if it doesn't mangle text with byte values above 127, as some older systems did.
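
A minimal sketch of the classic failure mode, assuming a channel that forces every byte into seven bits by clearing the high bit:

    # Latin-1 text contains byte values above 127.
    data = "café".encode("latin-1")               # b'caf\xe9'

    # A channel that is not eight-bit clean might clear the high bit of every
    # byte, silently corrupting anything outside US-ASCII.
    mangled = bytes(b & 0x7F for b in data)
    print(mangled.decode("latin-1"))              # "cafi" -- the é is gone

    # An eight-bit clean channel passes all 256 byte values through unchanged.
    assert bytes(b & 0xFF for b in data) == data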

See also ASCII character table, Yu-Shiang Whole Fish.
This article is provided by FOLDOC - Free Online Dictionary of Computing (foldoc.org)

ASCII

(American Standard Code for Information Interchange) Pronounced "ask-ee," it is the built-in binary code for representing characters in all computers except IBM mainframes, which use the EBCDIC coding system. ASCII was originally developed for communications and uses only seven bits per character, providing 128 combinations that include upper and lower case alphabetic letters, the numeric digits and special symbols such as the $ and %. The first 32 characters are set aside for communications and printer control (see ASCII chart).
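
Those properties are easy to illustrate in a few lines of Python (the characters chosen below are arbitrary):

    # Seven bits give 2**7 = 128 possible codes (0-127).
    assert 2 ** 7 == 128

    # Letters, digits and symbols such as $ and % all have codes below 128.
    for ch in "A", "z", "$", "%", "7":
        print(ch, ord(ch), format(ord(ch), "07b"))   # e.g. A -> 65 -> 1000001

    # The first 32 codes are control characters originally meant for
    # communications and printer control, e.g. tab, line feed, carriage return.
    assert ord("\t") == 9 and ord("\n") == 10 and ord("\r") == 13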

A Byte Holds ASCII and Then Some
Since the common storage unit in a computer is an 8-bit byte (256 character combinations) and ASCII uses only the first 128 values (0-127), the second set of 128 values (128-255) is technically not ASCII but is typically used for foreign-language and math symbols. In the first PCs running DOS, these values also held elementary graphics symbols. On the Mac, the additional values can be defined by the user.
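
A short sketch of how the same upper-half byte value reads differently depending on which "extended ASCII" is assumed; the code pages named here are common examples, not an exhaustive list:

    # Byte values 128-255 are not ASCII; their meaning depends on the code page.
    b = bytes([0xE9])
    print(b.decode("cp437"))       # "Θ" -- original IBM PC / DOS character set
    print(b.decode("latin-1"))     # "é" -- ISO Latin-1
    print(b.decode("mac_roman"))   # "È" -- classic Mac OS Roman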

ASCII vs. Hex
In technical applications typically used by developers, you may have a choice between entering data as ASCII or as "hex" when editing or searching. ASCII is entered by typing regular text, but because the keyboard cannot directly enter all 256 distinct byte values, the hexadecimal (hex) numbering system is used instead. Hex is entered by typing only the digits 0 to 9 and the letters A to F, and it provides a precise way of specifying any of the 256 possible values of a byte, whether they are control codes (0-31) or the upper set (128-255). See hex chart, ASCII file and Unicode.
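
A brief Python illustration of that hex-to-byte correspondence (the byte values below are arbitrary):

    # Two hex digits name one byte exactly, covering all 256 values (00-FF),
    # including control codes (00-1F) and the upper half (80-FF), which has
    # no printable ASCII form.
    raw = bytes.fromhex("48 65 6c 6c 6f 00 ff")
    print(raw)            # b'Hello\x00\xff'
    print(raw.hex(" "))   # 48 65 6c 6c 6f 00 ff
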
Copyright © 1981-2019 by The Computer Language Company Inc. All Rights reserved. THIS DEFINITION IS FOR PERSONAL USE ONLY. All other reproduction is strictly prohibited without permission from the publisher.