== Scanner ==

The first stage, the scanner, is usually based on a
finite-state machine (FSM). It has encoded within it information on the possible sequences of characters that can be contained within any of the tokens it handles (individual instances of these character sequences are termed
lexemes). For example, an
integer lexeme may contain any sequence of
numerical digit characters. In many cases, the first non-whitespace character can be used to deduce the kind of token that follows and subsequent input characters are then processed one at a time until reaching a character that is not in the set of characters acceptable for that token (this is termed the
maximal munch, or
longest match, rule). In some languages, the lexeme creation rules are more complex and may involve
backtracking over previously read characters. For example, in C, one 'L' character is not enough to distinguish between an identifier that begins with 'L' and a wide-character string literal.
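The maximal-munch rule described above can be sketched in a few lines of Python. This is a minimal illustration, not a production scanner: the token kinds (INTEGER, IDENTIFIER, SYMBOL) and the function name are invented for the example.

```python
def scan(source):
    """Split source into lexemes using the maximal-munch rule:
    keep consuming characters while they remain acceptable for the
    current token kind, and stop at the first one that is not."""
    lexemes = []
    i = 0
    while i < len(source):
        ch = source[i]
        if ch.isspace():                 # whitespace separates tokens
            i += 1
        elif ch.isdigit():               # integer: longest run of digits
            start = i
            while i < len(source) and source[i].isdigit():
                i += 1
            lexemes.append(("INTEGER", source[start:i]))
        elif ch.isalpha() or ch == "_":  # identifier: letters, digits, underscores
            start = i
            while i < len(source) and (source[i].isalnum() or source[i] == "_"):
                i += 1
            lexemes.append(("IDENTIFIER", source[start:i]))
        else:                            # any other single character
            lexemes.append(("SYMBOL", ch))
            i += 1
    return lexemes
```

Here the first non-whitespace character selects the token kind, and scanning continues until a character outside that kind's character set is reached, so an input such as `count = 42` yields one IDENTIFIER, one SYMBOL, and a single INTEGER lexeme "42" rather than two one-digit lexemes.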
== Evaluator ==

A
lexeme, however, is only a string of characters known to be of a certain kind (e.g., a string literal, a sequence of letters). The second stage of a lexical analyzer, the
evaluator, goes over the characters of the lexeme to produce a
value containing relevant information for the parser. The lexeme's type combined with its value is what properly constitutes a
token. The value in the token can be whatever is deemed necessary for the parser to interpret a token of that type. Some examples of typical values produced by an evaluator include:

• A token for an identifier will often simply contain the characters of the associated lexeme.
• Token values for keywords and special characters are usually omitted, as the type alone contains all the information needed.
• Evaluators processing integer literals may pass the string on as is (deferring evaluation to the semantic analysis phase), or may perform evaluation themselves to produce numeric values.
• For a simple quoted string literal, the evaluator needs to remove only the quotes, but the evaluator for an escaped string literal may also incorporate a lexer, which unescapes the escape sequences.

The evaluator may also suppress a lexeme entirely, concealing it from the parser, which is useful for whitespace and comments. For example, in the source code of a computer program, the string

net_worth_future = (assets - liabilities);

might be converted into the following lexical token stream, where each line represents a token composed of a TYPE followed by an optional value:

IDENTIFIER "net_worth_future"
EQUALS
OPEN_PARENTHESIS
IDENTIFIER "assets"
MINUS
IDENTIFIER "liabilities"
CLOSE_PARENTHESIS
SEMICOLON

Lexers may be written by hand. This is practical if the list of tokens is small, but lexers generated by automated tooling as part of a
compiler-compiler toolchain are more practical for a larger number of potential tokens. These tools generally accept regular expressions that describe the tokens allowed in the input stream. Each regular expression is associated with a
production rule in the lexical grammar of the programming language that evaluates the lexemes matching the regular expression. These tools may generate source code that can be compiled and executed or construct a
state transition table for a
finite-state machine (which is plugged into template code for compiling and executing). Regular expressions compactly represent patterns that the characters in lexemes might follow. For example, for an
English-based language, an IDENTIFIER token might be any English alphabetic character or an underscore, followed by any number of instances of ASCII alphanumeric characters and/or underscores. This could be represented compactly by the string [a-zA-Z_][a-zA-Z_0-9]*. This means "any character a-z, A-Z or _, followed by 0 or more of a-z, A-Z, _ or 0-9". Regular expressions and the finite-state machines they generate are not powerful enough to handle recursive patterns, such as "
n opening parentheses, followed by a statement, followed by
n closing parentheses." They are unable to keep count, and verify that
n is the same on both sides, unless a finite set of permissible values exists for
n. It takes a full parser to recognize such patterns in their full generality. A parser can push parentheses on a stack and then try to pop them off and see if the stack is empty at the end (see example in the
Structure and Interpretation of Computer Programs book).

== Obstacles ==