Integer (computer science)

In computer science, an integer is a datum of integral data type, a data type that represents some range of mathematical integers. Integral data types may be of different sizes and may or may not be allowed to contain negative values. Integers are commonly represented in a computer as a group of binary digits (bits). The size of the grouping varies, so the set of integer sizes available differs between types of computers. Computer hardware nearly always provides a way to represent a processor register or memory address as an integer.

Value and representation
The value of an item with an integral type is the mathematical integer that it corresponds to. Integral types may be unsigned (capable of representing only non-negative integers) or signed (capable of representing negative integers as well).

An integer value is typically specified in the source code of a program as a sequence of digits optionally prefixed with + or −. Some programming languages allow other notations, such as hexadecimal (base 16) or octal (base 8). Some programming languages also permit digit group separators.

The internal representation of this datum is the way the value is stored in the computer's memory. Unlike mathematical integers, a typical datum in a computer has some minimum and maximum possible value. The most common representation of a positive integer is a string of bits, using the binary numeral system. The order of the memory bytes storing the bits varies; see endianness.

The width, precision, or bitness of an integral type is the number of bits in its representation. An integral type with n bits can encode 2^n numbers; for example, an unsigned type typically represents the non-negative values 0 through 2^n − 1. Other encodings of integer values to bit patterns are sometimes used, for example binary-coded decimal or Gray code, or as printed character codes such as ASCII.

There are four well-known ways to represent signed numbers in a binary computing system. The most common is two's complement, which allows a signed integral type with n bits to represent numbers from −2^(n−1) through 2^(n−1) − 1. Two's complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values (in particular, no separate +0 and −0), and because addition, subtraction and multiplication do not need to distinguish between signed and unsigned types. Other possibilities include offset binary, sign-magnitude, and ones' complement.

Some computer languages define integer sizes in a machine-independent way; others have varying definitions depending on the underlying processor word size. Not all language implementations define variables of all integer sizes, and defined sizes may not even be distinct in a particular implementation. An integer in one programming language may be a different size in a different language, on a different processor, or in an execution context of different bitness; see the discussion of word sizes below.

Some older computer architectures used decimal representations of integers, stored in binary-coded decimal (BCD) or another format. These values generally require data sizes of 4 bits per decimal digit (sometimes called a nibble), usually with additional bits for a sign. Many modern CPUs provide limited support for decimal integers as an extended datatype, providing instructions for converting such values to and from binary values. Depending on the architecture, decimal integers may have fixed sizes (e.g., 7 decimal digits plus a sign fit into a 32-bit word), or may be variable-length (up to some maximum digit size), typically occupying two digits per byte (octet).
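As an illustration, the following minimal C sketch (assuming the fixed-width types from stdint.h are available, as they are on virtually all modern platforms) shows the unsigned range 0 through 2^8 − 1 and the two's-complement correspondence between bit patterns and values described above:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* An unsigned 8-bit type represents 0 through 2^8 - 1 = 255;
           arithmetic on it wraps around modulo 2^8. */
        uint8_t u = 255;
        u = u + 1;                       /* wraps to 0 */
        printf("255 + 1 as uint8_t = %u\n", (unsigned)u);

        /* A two's-complement 8-bit type represents -128 through 127.
           The value -1 is the all-ones bit pattern, so reinterpreting
           it as unsigned yields 2^8 - 1 = 255. */
        int8_t s = -1;
        printf("-1 reinterpreted as uint8_t = %u\n", (unsigned)(uint8_t)s);
        return 0;
    }

The one-to-one correspondence between bit patterns and values is what lets the hardware use the same addition circuit for signed and unsigned operands.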
Common integral data types
Different CPUs support different integral data types. Typically, hardware will support both signed and unsigned types, but only a small, fixed set of widths, commonly 8, 16, 32, and 64 bits. High-level programming languages provide more possibilities. It is common to have a 'double width' integral type that has twice as many bits as the biggest hardware-supported type. Many languages also have bit-field types (a specified number of bits, usually constrained to be less than the maximum hardware-supported width) and range types (that can represent only the integers in a specified range).

Some languages, such as Lisp, Smalltalk, REXX, Haskell, Python, and Raku, support arbitrary-precision integers (also known as infinite-precision integers or bignums). Other languages that do not support this concept as a top-level construct may have libraries available to represent very large numbers using arrays of smaller variables, such as Java's BigInteger class or Perl's "bigint" package. These use as much of the computer's memory as is necessary to store the numbers; however, a computer has only a finite amount of storage, so they, too, can only represent a finite subset of the mathematical integers. These schemes support very large numbers; for example, one kilobyte of memory could be used to store numbers up to 2466 decimal digits long.

A Boolean type is a type that can represent only two values: 0 and 1, usually identified with false and true respectively. This type can be stored in memory using a single bit, but is often given a full byte for convenience of addressing and speed of access.

A four-bit quantity is known as a nibble (when eating, being smaller than a bite) or nybble (being a pun on the form of the word byte). One nibble corresponds to one digit in hexadecimal and holds one digit or a sign code in binary-coded decimal.

Bytes and octets
The term byte initially meant 'the smallest addressable unit of memory'. In the past, 5-, 6-, 7-, 8-, and 9-bit bytes have all been used. There have also been computers that could address individual bits ('bit-addressed machines'), or that could only address 16- or 32-bit quantities ('word-addressed machines'). The term byte was usually not used at all in connection with bit- and word-addressed machines. The term octet always refers to an 8-bit quantity. It is mostly used in the field of computer networking, where computers with different byte widths might have to communicate. In modern usage byte almost invariably means eight bits, since all other sizes have fallen into disuse; thus byte has come to be synonymous with octet.

Words
The term 'word' is used for a small group of bits that are handled simultaneously by processors of a particular architecture. The size of a word is thus CPU-specific. Many different word sizes have been used, including 6, 8, 12, 16, 18, 24, 32, 36, 39, 40, 48, 60, and 64 bits. Since it is architectural, the size of a word is usually set by the first CPU in a family, rather than by the characteristics of a later compatible CPU. The meanings of terms derived from word, such as longword, doubleword, quadword, and halfword, also vary with the CPU and OS.

Standard integer
The standard integer size is platform-dependent. In C, it is denoted by int and is required to be at least 16 bits. Windows and Unix systems have 32-bit ints on both 32-bit and 64-bit architectures.
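This platform dependence can be observed directly. The following small C sketch prints the implementation's actual type sizes; the output differs between platforms (for example, long is 4 bytes on 64-bit Windows but 8 bytes on 64-bit Linux):

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* The C standard guarantees only minimum widths (int must be
           at least 16 bits); the actual sizes are implementation-defined. */
        printf("bits per byte: %d\n", CHAR_BIT);
        printf("short    : %zu bytes\n", sizeof(short));
        printf("int      : %zu bytes\n", sizeof(int));
        printf("long     : %zu bytes\n", sizeof(long));
        printf("long long: %zu bytes\n", sizeof(long long));
        return 0;
    }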
Short integer
A short integer can represent a whole number that may take less storage, while having a smaller range, compared with a standard integer on the same machine. In C, it is denoted by short. It is required to be at least 16 bits, and is often smaller than a standard integer, but this is not required. A conforming program can assume that it can safely store values between −(2^15 − 1) and 2^15 − 1, but it may not assume that the range is not larger. In Java, a short is a 16-bit integer. In the Windows API, the datatype SHORT is defined as a 16-bit signed integer on all machines.

This can be an issue when exchanging code and data between platforms, or when doing direct hardware access. Thus, there are several sets of headers providing platform-independent exact-width types. The C standard library provides stdint.h; it was introduced in C99, and C++11 added the equivalent cstdint.
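As a sketch of how these headers are used, the following C99 program relies only on types and macros that stdint.h and inttypes.h define on platforms providing 16-bit types:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        /* int16_t and uint16_t are exactly 16 bits on every platform
           that defines them, unlike the built-in short. */
        int16_t  a = INT16_MIN;   /* -32768 */
        int16_t  b = INT16_MAX;   /*  32767 */
        uint16_t c = UINT16_MAX;  /*  65535 */
        printf("int16_t range: %" PRId16 " .. %" PRId16 "\n", a, b);
        printf("uint16_t max : %" PRIu16 "\n", c);
        return 0;
    }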
Syntax
Integer literals can be written as regular Arabic numerals, consisting of a sequence of digits and with negation indicated by a minus sign before the value. However, most programming languages disallow the use of commas or spaces for digit grouping. Examples of integer literals are:

• 42
• 10000
• -233000

There are several alternate methods for writing integer literals in many programming languages:

• Many programming languages, especially those influenced by C, prefix an integer literal with 0X or 0x to represent a hexadecimal value, e.g. 0xDEADBEEF. Other languages may use a different notation; e.g., some assembly languages append an H or h to the end of a hexadecimal value.
• Perl, Ruby, Java, Julia, D, Go, C#, Rust, Python (starting from version 3.6), and PHP (from version 7.4.0 onwards) allow embedded underscores for clarity, e.g. 10_000_000, and fixed-form Fortran ignores embedded spaces in integer literals. C (starting from C23) and C++ use single quotes for this purpose.
• In C and C++, a leading zero indicates an octal value, e.g. 0755. This was primarily intended to be used with Unix file modes; however, it has been criticized because normal integers may also begin with a zero. As such, Python, Ruby, Haskell, and OCaml prefix octal values with 0O or 0o, following the layout used by hexadecimal values.
• Several languages, including Java, C#, Scala, Python, Ruby, OCaml, C (starting from C23) and C++, can represent binary values by prefixing a number with 0B or 0b.
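A minimal C sketch of several of these notations follows; the binary literal and the digit separator are assumed to be compiled as C23, while older compilers accept only the decimal, hexadecimal, and octal forms:

    #include <stdio.h>

    int main(void) {
        int dec = 42;          /* ordinary decimal literal     */
        int hex = 0x2A;        /* hexadecimal: 0x prefix       */
        int oct = 052;         /* octal: historic leading zero */
    #if __STDC_VERSION__ >= 202311L
        int bin = 0b101010;    /* binary literal (C23)         */
        int sep = 1'000'000;   /* digit separator (C23)        */
        printf("%d %d\n", bin, sep);
    #endif
        printf("%d %d %d\n", dec, hex, oct);  /* all three print 42 */
        return 0;
    }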
Extreme values
In many programming languages, there exist predefined constants representing the least and the greatest values representable with a given integer type. Names for these include:

• SmallBASIC: MAXINT
• Java: Integer.MIN_VALUE, Integer.MAX_VALUE (corresponding fields exist for the other integer classes in Java)
• C: INT_MIN, INT_MAX, etc. (in limits.h)
• GLib: G_MININT, G_MAXINT, G_MAXUINT, ...
• C++: std::numeric_limits<int>::min(), std::numeric_limits<int>::max(), etc.
• Haskell: minBound, maxBound
• Pascal: MaxInt
• Python 2: sys.maxint
• Python 3: sys.maxsize
• Rust: i32::MIN, i32::MAX, etc.
• Turing: maxint
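For instance, the C constants come from limits.h; this short sketch prints them (the values shown in the comments assume a typical 32-bit int):

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* limits.h defines the extreme values of each built-in
           integer type for this implementation. */
        printf("INT_MIN   = %d\n", INT_MIN);    /* -2147483648 with 32-bit int */
        printf("INT_MAX   = %d\n", INT_MAX);    /*  2147483647 with 32-bit int */
        printf("UINT_MAX  = %u\n", UINT_MAX);
        printf("LLONG_MAX = %lld\n", LLONG_MAX);
        return 0;
    }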